Sample records for head-controlled human-computer interface

  1. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface that detects head movement from the changing positions and number of light sources on the head. When the user utilizes the head-mounted display to browse a computer screen, the system catches images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the computer program locates the center point of each pupil in the images and records information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system catches images of the user's head using a CCD camera in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so that the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.
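
    A minimal Python sketch of the cursor-mapping step this abstract describes: find the centroid of bright light-source pixels in a camera frame and scale it to screen coordinates. The threshold and resolutions are illustrative assumptions, not values from the paper.

      import numpy as np

      def marker_centroid(frame, threshold=200):
          """Return (row, col) centroid of pixels at least as bright as threshold."""
          ys, xs = np.nonzero(frame >= threshold)
          if len(xs) == 0:
              return None  # no marker visible in this frame
          return ys.mean(), xs.mean()

      def to_screen(centroid, cam_res=(480, 640), screen_res=(768, 1024)):
          """Linearly map a camera-image point to screen coordinates."""
          r, c = centroid
          return (int(r * screen_res[0] / cam_res[0]),
                  int(c * screen_res[1] / cam_res[1]))

      frame = np.zeros((480, 640), dtype=np.uint8)
      frame[100:104, 320:324] = 255  # simulated light source on the eyeglass frame
      print(to_screen(marker_centroid(frame)))  # -> (162, 514)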

  2. Human-computer interface including haptically controlled interactions

    DOEpatents

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
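
    A toy sketch of the force-to-scroll idea (not the patent's actual implementation): below a small deadband the user only feels the boundary; beyond it, scroll rate grows with the applied force up to a cap. The deadband, gain, and cap are invented values.

      def scroll_rate(force_newtons, deadband=0.2, gain=120.0, max_rate=600.0):
          """Map force against a haptic boundary to scroll speed (pixels/s)."""
          excess = abs(force_newtons) - deadband
          if excess <= 0:
              return 0.0               # light touch: feel the boundary, no scroll
          rate = gain * excess         # rate proportional to applied force
          return min(rate, max_rate)

      for f in (0.1, 0.5, 2.0, 10.0):
          print(f, scroll_rate(f))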

  3. Human computer interface guide, revision A

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.

  4. Human-computer interface

    DOEpatents

    Anderson, Thomas G.

    2004-12-21

    The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
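
    A hedged sketch of the force profile the abstract describes: resistance ramps up as the locus of interaction approaches a boundary, then perceptibly drops once the boundary is traversed. Constants and units are assumed for illustration.

      def boundary_force(distance, ramp_width=0.05, k=40.0):
          """Resistive force versus signed distance to the boundary (negative = traversed)."""
          if distance < 0:
              return 0.0                       # traversed: force abruptly drops away
          if distance > ramp_width:
              return 0.0                       # far from the boundary: free movement
          return k * (ramp_width - distance)   # ramps up as the boundary nears

      for d in (0.2, 0.04, 0.01, -0.01):
          print(d, boundary_force(d))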

  5. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of DSN, and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements for the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  6. Human-machine interface hardware: The next decade

    NASA Technical Reports Server (NTRS)

    Marcus, Elizabeth A.

    1991-01-01

    In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable, faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user interact and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now at a crossroads in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.

  7. Evaluation of Head Orientation and Neck Muscle EMG Signals as Command Inputs to a Human-Computer Interface for Individuals with High Tetraplegia

    PubMed Central

    Williams, Matthew R.; Kirsch, Robert F.

    2013-01-01

    We investigated the performance of three user interfaces for restoration of cursor control in individuals with tetraplegia: head orientation, EMG from face and neck muscles, and a standard computer mouse (for comparison). Subjects engaged in a 2D, center-out, Fitts’ Law style task and performance was evaluated using several measures. Overall, head orientation commanded motion resembled mouse commanded cursor motion (smooth, accurate movements to all targets), although with somewhat lower performance. EMG commanded movements exhibited a higher average speed, but other performance measures were lower, particularly for diagonal targets. Compared to head orientation, EMG as a cursor command source was less accurate, was more affected by target direction and was more prone to overshoot the target. In particular, EMG commands for diagonal targets were more sequential, moving first in one direction and then the other rather than moving simultaneously in the two directions. While the relative performance of each user interface differs, each has specific advantages depending on the application. PMID:18990652
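
    For context, a short sketch of the standard Fitts'-law measures used to score such center-out tasks; the paper's exact performance metrics may differ from these.

      import math

      def index_of_difficulty(distance, width):
          """Shannon formulation: ID = log2(D/W + 1), in bits."""
          return math.log2(distance / width + 1)

      def throughput(distance, width, movement_time_s):
          """Bits per second for one movement."""
          return index_of_difficulty(distance, width) / movement_time_s

      print(throughput(distance=200, width=40, movement_time_s=1.2))  # ~2.2 bits/s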

  8. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are fused to reduce the influence of lighting and person dependence for the construction of online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.

  9. Formal specification of human-computer interfaces

    NASA Technical Reports Server (NTRS)

    Auernheimer, Brent

    1990-01-01

    A high-level formal specification of a human computer interface is described. Previous work is reviewed and the ASLAN specification language is described. Top-level specifications written in ASLAN for a library and a multiwindow interface are discussed.

  10. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    NASA Astrophysics Data System (ADS)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eyetracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
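
    An illustrative skeleton of one such 'modality server': a small socket service that hides a wrapped recognition package behind a line-oriented protocol. The command names and the stubbed backend are invented for this sketch; the actual servers wrapped commercial and research packages.

      import socketserver

      class ModalityHandler(socketserver.StreamRequestHandler):
          def handle(self):
              for line in self.rfile:                  # one request per line
                  cmd = line.decode().strip()
                  if cmd == "STATUS":
                      self.wfile.write(b"OK speech-recognizer ready\n")
                  elif cmd.startswith("RECOGNIZE"):
                      # a real server would call the wrapped vendor package here
                      self.wfile.write(b"RESULT <transcript goes here>\n")
                  else:
                      self.wfile.write(b"ERR unknown command\n")

      if __name__ == "__main__":
          with socketserver.TCPServer(("localhost", 9999), ModalityHandler) as srv:
              srv.serve_forever()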

  11. Human-computer interface incorporating personal and application domains

    DOEpatents

    Anderson, Thomas G [Albuquerque, NM

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.
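
    A minimal sketch of the described transition rule, assuming the personal domain is entered when the cursor nears the bottom edge of the display; the band width is an arbitrary choice, not a value from the patent.

      def active_domain(cursor_y, screen_height, band_px=30):
          """Return which domain receives input for a given cursor position."""
          if cursor_y >= screen_height - band_px:   # cursor near the bottom extreme
              return "personal"
          return "application"

      print(active_domain(cursor_y=750, screen_height=768))  # -> 'personal'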

  12. Human-computer interface incorporating personal and application domains

    DOEpatents

    Anderson, Thomas G.

    2004-04-20

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  13. Control-display mapping in brain-computer interfaces.

    PubMed

    Thurlings, Marieke E; van Erp, Jan B F; Brouwer, Anne-Marie; Blankertz, Benjamin; Werkhoven, Peter

    2012-01-01

    Event-related potential (ERP) based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. When using a tactile ERP-BCI for navigation, mapping is required between navigation directions on a visual display and unambiguously corresponding tactile stimuli (tactors) from a tactile control device: control-display mapping (CDM). We investigated the effect of congruent (both display and control horizontal or both vertical) and incongruent (vertical display, horizontal control) CDMs on task performance, the ERP and potential BCI performance. Ten participants attended to a target (determined via CDM), in a stream of sequentially vibrating tactors. We show that congruent CDM yields best task performance, enhanced the P300 and results in increased estimated BCI performance. This suggests a reduced availability of attentional resources when operating an ERP-BCI with incongruent CDM. Additionally, we found an enhanced N2 for incongruent CDM, which indicates a conflict between visual display and tactile control orientations. Incongruency in control-display mapping reduces task performance. In this study, brain responses, task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain-computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance and thus facilitates human-computer interaction.

  14. Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.

    ERIC Educational Resources Information Center

    Acker, Stephen R.

    1986-01-01

    This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)

  15. User participation in the development of the human/computer interface for control centers

    NASA Technical Reports Server (NTRS)

    Broome, Richard; Quick-Campbell, Marlene; Creegan, James; Dutilly, Robert

    1996-01-01

    Technological advances coupled with the requirements to reduce operations staffing costs led to the demand for efficient, technologically-sophisticated mission operations control centers. The control center under development for the earth observing system (EOS) is considered. The users are involved in the development of a control center in order to ensure that it is cost-efficient and flexible. A number of measures were implemented in the EOS program in order to encourage user involvement in the area of human-computer interface development. The following user participation exercises carried out in relation to the system analysis and design are described: the shadow participation of the programmers during a day of operations; the flight operations personnel interviews; and the analysis of the flight operations team tasks. The user participation in the interface prototype development, the prototype evaluation, and the system implementation are reported on. The involvement of the users early in the development process enables the requirements to be better understood and the cost to be reduced.

  16. Using the Electrocorticographic Speech Network to Control a Brain-Computer Interface in Humans

    PubMed Central

    Leuthardt, Eric C.; Gaona, Charles; Sharma, Mohit; Szrama, Nicholas; Roland, Jarod; Freudenberg, Zac; Solis, Jamie; Breshears, Jonathan; Schalk, Gerwin

    2013-01-01

    Electrocorticography (ECoG) has emerged as a new signal platform for brain-computer interface (BCI) systems. Classically, the cortical physiology that has been commonly investigated and utilized for device control in humans has been brain signals from sensorimotor cortex. Hence, it was unknown whether other neurophysiological substrates, such as the speech network, could be used to further improve on or complement existing motor-based control paradigms. We demonstrate here for the first time that ECoG signals associated with different overt and imagined phoneme articulation can enable invasively monitored human patients to control a one-dimensional computer cursor rapidly and accurately. This phonetic content was distinguishable within higher gamma frequency oscillations and enabled users to achieve final target accuracies between 68 and 91% within 15 minutes. Additionally, one of the patients achieved robust control using recordings from a microarray consisting of 1 mm spaced microwires. These findings suggest that the cortical network associated with speech could provide an additional cognitive and physiologic substrate for BCI operation and that these signals can be acquired from a cortical array that is small and minimally invasive. PMID:21471638

  17. The design of an intelligent human-computer interface for the test, control and monitor system

    NASA Technical Reports Server (NTRS)

    Shoaff, William D.

    1988-01-01

    The graphical intelligence and assistance capabilities of a human-computer interface for the Test, Control, and Monitor System at Kennedy Space Center are explored. The report focuses on how a particular commercial off-the-shelf graphical software package, Data Views, can be used to produce tools that build widgets such as menus, text panels, graphs, icons, windows, and ultimately complete interfaces for monitoring data from an application; controlling an application by providing input data to it; and testing an application by both monitoring and controlling it. A complete set of tools for building interfaces is described in a manual for the TCMS toolkit. Simple tools create primitive widgets such as lines, rectangles and text strings. Intermediate level tools create pictographs from primitive widgets, and connect processes to either text strings or pictographs. Other tools create input objects; Data Views supports output objects directly, thus output objects are not considered. Finally, a set of utilities for executing, monitoring use, editing, and displaying the content of interfaces is included in the toolkit.

  18. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

    A four stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  19. The Human-Computer Interface and Information Literacy: Some Basics and Beyond.

    ERIC Educational Resources Information Center

    Church, Gary M.

    1999-01-01

    Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…

  20. Human performance interfaces in air traffic control.

    PubMed

    Chang, Yu-Hern; Yeh, Chung-Hsing

    2010-01-01

    This paper examines how human performance factors in air traffic control (ATC) affect each other through their mutual interactions. The paper extends the conceptual SHEL model of ergonomics to describe the ATC system as human performance interfaces in which the air traffic controllers interact with other human performance factors including other controllers, software, hardware, environment, and organisation. New research hypotheses about the relationships between human performance interfaces of the system are developed and tested on data collected from air traffic controllers, using structural equation modelling. The research results suggest that organisational influences play a more significant role than individual differences or peer influences in how the controllers interact with the software, hardware, and environment of the ATC system. There are mutual influences between the controller-software, controller-hardware, controller-environment, and controller-organisation interfaces of the ATC system, with the exception of the controller-controller interface. The research findings of this study provide practical insights into managing human performance interfaces of the ATC system in the face of internal or external change, particularly in understanding its possible consequences in relation to the interactions between human performance factors.

  21. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  22. Human-Computer Interface Controlled by Horizontal Directional Eye Movements and Voluntary Blinks Using AC EOG Signals

    NASA Astrophysics Data System (ADS)

    Kajiwara, Yusuke; Murata, Hiroaki; Kimura, Haruhiko; Abe, Koji

    As a communication support tool for cases of amyotrophic lateral sclerosis (ALS), research on eye-gaze human-computer interfaces has been active. However, since voluntary and involuntary eye movements cannot be distinguished in these interfaces, their performance is still not sufficient for practical use. This paper presents a high-performance human-computer interface system that unites high-quality recognition of horizontal directional eye movements and voluntary blinks. The experimental results show that, compared with an existing system that recognizes horizontal and vertical directional eye movements in addition to voluntary blinks, the number of incorrect inputs is decreased by 35.1% and character inputs are sped up by 17.4%.
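
    A hedged sketch of the kind of threshold classification such an EOG interface performs: strong deflections on the horizontal channel mark left/right saccades, while a large spike on the vertical channel marks a voluntary blink. The channel assignment and thresholds are assumptions, not the authors' algorithm.

      import numpy as np

      def classify_eog(h_chan, v_chan, saccade_thr=80e-6, blink_thr=120e-6):
          """Classify one window of horizontal/vertical EOG samples (volts)."""
          if np.max(np.abs(v_chan)) > blink_thr:
              return "blink"
          peak = h_chan[np.argmax(np.abs(h_chan))]   # signed extreme deflection
          if peak > saccade_thr:
              return "right"
          if peak < -saccade_thr:
              return "left"
          return "rest"

      t = np.linspace(0, 0.5, 250)
      print(classify_eog(100e-6 * np.sin(2 * np.pi * 2 * t), np.zeros_like(t)))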

  23. Designing the user interface: strategies for effective human-computer interaction

    NASA Astrophysics Data System (ADS)

    Shneiderman, B.

    1998-03-01

    In revising this popular book, Ben Shneiderman again provides a complete, current and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years' experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.

  24. Designers' models of the human-computer interface

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Breedin, Sarah D.

    1993-01-01

    Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.

  25. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
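
    The minimum-variance fusion mentioned here is, in its simplest scalar form, inverse-variance weighting of the two orientation estimates; a sketch under that assumption (the paper's estimator may be more elaborate).

      def fuse(theta_imu, var_imu, theta_cam, var_cam):
          """Inverse-variance fusion of two independent orientation estimates."""
          w_imu = 1.0 / var_imu
          w_cam = 1.0 / var_cam
          theta = (w_imu * theta_imu + w_cam * theta_cam) / (w_imu + w_cam)
          var = 1.0 / (w_imu + w_cam)   # always <= min(var_imu, var_cam)
          return theta, var

      print(fuse(theta_imu=10.0, var_imu=4.0, theta_cam=12.0, var_cam=1.0))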

  26. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  27. Learning Machine, Vietnamese Based Human-Computer Interface.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    The sixth session of IT@EDU98 consisted of seven papers on the topic of the learning machine--Vietnamese based human-computer interface, and was chaired by Phan Viet Hoang (Informatics College, Singapore). "Knowledge Based Approach for English Vietnamese Machine Translation" (Hoang Kiem, Dinh Dien) presents the knowledge base approach,…

  28. Quadcopter control in three-dimensional space using a noninvasive motor imagery based brain-computer interface

    PubMed Central

    LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin

    2013-01-01

    Objective: At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real world device has on subjects’ control in comparison to a two-dimensional virtual cursor task. Approach: Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance: Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user’s ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in the three-dimensional physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional non-invasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712
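
    A generic sketch of the control principle behind such motor-imagery BCIs: band power in the mu rhythm (8-12 Hz) over left and right sensorimotor cortex is mapped to a velocity command. The actual decoder, calibration, and smoothing used in the study are more involved; all constants here are illustrative.

      import numpy as np

      def mu_band_power(eeg, fs, band=(8.0, 12.0)):
          """Power of one EEG channel in the mu band, via an FFT periodogram."""
          spectrum = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
          freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return spectrum[mask].sum()

      def velocity_command(power_left, power_right, gain=0.5):
          """Lateral velocity from the difference in mu power between hemispheres."""
          return gain * (power_right - power_left)

      fs = 256
      t = np.arange(fs) / fs
      left = np.sin(2 * np.pi * 10 * t)         # strong 10 Hz mu rhythm, left channel
      right = 0.2 * np.sin(2 * np.pi * 10 * t)  # attenuated mu rhythm, right channel
      print(velocity_command(mu_band_power(left, fs), mu_band_power(right, fs)))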

  29. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  30. Assessment of a human computer interface prototyping environment

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1993-01-01

    A Human Computer Interface (HCI) prototyping environment with embedded evaluation capability has been successfully assessed which will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. The HCI prototyping environment is designed to include four components: (1) a HCI format development tool, (2) a test and evaluation simulator development tool, (3) a dynamic, interactive interface between the HCI prototype and simulator, and (4) an embedded evaluation capability to evaluate the adequacy of an HCI based on a user's performance.

  31. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.
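
    A hedged sketch of one way such a glove could recover joint angles: since a PVDF strip's output is roughly proportional to the bending rate, the angle can be estimated by scaling and integrating the voltage. The calibration constant is invented; the authors' processing chain may differ.

      import numpy as np

      def angle_from_piezo(voltage, fs, k_deg_per_vs=350.0):
          """Integrate sensor voltage (V) sampled at fs (Hz) into joint angle (deg)."""
          return k_deg_per_vs * np.cumsum(voltage) / fs

      fs = 100
      v = np.r_[np.full(50, 0.2), np.zeros(50)]   # brief flexion, then hold still
      print(angle_from_piezo(v, fs)[-1])          # final bend angle estimate (~35 deg)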

  32. Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano; Vaz, Luís

    2010-01-01

    This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain Computer Interfaces (BCI), characterized by providing a small set of sparsely issued commands, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) that is used to train users' ability to steer a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller blends the information provided by the local motion planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.
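
    A minimal sketch of shared control with a sparse HMI, assuming a simple convex blend of the user's discrete command and the local planner's continuous command; in the paper the blending is conditioned on the user-characterization step rather than a fixed weight.

      def blend(u_user, u_planner, alpha):
          """Convex combination of user and planner commands (v, omega)."""
          return tuple(alpha * u + (1 - alpha) * p for u, p in zip(u_user, u_planner))

      u_user = (0.0, 0.5)      # discrete 'turn left' mapped to an angular rate
      u_planner = (0.4, 0.0)   # planner: go straight along the corridor
      print(blend(u_user, u_planner, alpha=0.3))   # -> (0.28, 0.15)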

  33. Region based Brain Computer Interface for a home control application.

    PubMed

    Akman Aydin, Eda; Bay, Omer Faruk; Guler, Inan

    2015-08-01

    Environment control is one of the important challenges for disabled people who suffer from neuromuscular diseases. A Brain Computer Interface (BCI) provides a communication channel between the human brain and the environment without requiring any muscular activation. The most important expectations for a home control application are high accuracy and reliable control. The region-based paradigm is a stimulus paradigm based on the oddball principle that requires selection of a target at two levels. This paper presents an application of the region-based paradigm to smart home control for people with neuromuscular diseases. In this study, a region-based stimulus interface containing 49 commands was designed. Five non-disabled subjects participated in the experiments. Offline analysis of the experiments yielded 95% accuracy for five flashes. This result shows that the region-based paradigm can be used to select commands in a smart home control application with high accuracy and a low number of repetitions. Furthermore, no statistically significant difference was observed between the level accuracies.
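
    A sketch of the two-level, region-based selection the paradigm implies for 49 commands (7 regions x 7 items), assuming per-flash P300 classifier scores are already available from an upstream detector; the grouping into 7x7 is an assumption for illustration.

      import numpy as np

      def select_command(region_scores, item_scores):
          """region_scores: (7,) mean P300 scores; item_scores: (7, 7) per region."""
          region = int(np.argmax(region_scores))     # level 1: pick the region
          item = int(np.argmax(item_scores[region])) # level 2: pick the item in it
          return region * 7 + item                   # command index in 0..48

      rng = np.random.default_rng(0)
      print(select_command(rng.random(7), rng.random((7, 7))))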

  34. Human perceptual deficits as factors in computer interface test and evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowser, S.E.

    1992-06-01

    Issues related to testing and evaluating human computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment are more likely to be defined than user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits tend to be found in higher-than-overall population distribution in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Secondly, interface designs should use multimode information coding.

  35. Perspectives on Human-Computer Interface: Introduction and Overview.

    ERIC Educational Resources Information Center

    Harman, Donna; Lunin, Lois F.

    1992-01-01

    Discusses human-computer interfaces in information seeking that focus on end users, and provides an overview of articles in this section that (1) provide librarians and information specialists with guidelines for selecting information-seeking systems; (2) provide producers of information systems with directions for production or research; and (3)…

  36. US Army Weapon Systems Human-Computer Interface (WSHCI) style guide, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avery, L.W.; O'Mara, P.A.; Shepard, A.P.

    1996-09-30

    A stated goal of the U.S. Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of style guides. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the unique requirements of the Army's real time and near-real time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide. This document, the U.S. Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide, represents the first version of that style guide. The purpose of this document is to provide HCI design guidance for RT/NRT Army systems across the weapon systems domains of ground, aviation, missile, and soldier systems. Each domain should customize and extend this guidance by developing their domain-specific style guides, which will be used to guide the development of future systems within their domains.

  37. Human-computer interfaces applied to numerical solution of the Plateau problem

    NASA Astrophysics Data System (ADS)

    Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério

    2015-09-01

    In this work we present code in Matlab that solves the Plateau problem numerically and includes a human-computer interface. The Plateau problem has applications in areas of knowledge such as Computer Graphics. The solution method is the same as that of the Surface Evolver, but with the addition of a complete graphical interface with the user. This will enable us to implement other kinds of interfaces, such as ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community; in particular, its use is practically impossible for most physically challenged people.

  38. Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments

    PubMed Central

    Víctor Rodrigo, Mercado-García

    2017-01-01

    Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not efficient or reliable yet for everyone at any time. Over the past few years, researchers have argued that main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities. PMID:29317861

  39. A head movement image (HMI)-controlled computer mouse for people with disabilities.

    PubMed

    Chen, Yu-Luen; Chen, Weoi-Luen; Kuo, Te-Son; Lai, Jin-Shin

    2003-02-04

    This study proposes image processing and microprocessor technology for use in developing a head movement image (HMI)-controlled computer mouse system for the spinal cord injured (SCI). The system controls the movement and direction of the mouse cursor by capturing head movement images using a marker installed on the user's headset. In the clinical trial, this new mouse system was compared with an infrared-controlled mouse system on various tasks with nine subjects with SCI. The results were favourable to the new mouse system. The differences between the new mouse system and the infrared-controlled mouse reached statistical significance in each of the test situations (p<0.05). The HMI-controlled computer mouse improves the input speed. People with disabilities need only wear the headset and move their heads to freely control the movement of the mouse cursor.

  40. Controller/Computer Interface with an Air-Ground Data Link

    DOT National Transportation Integrated Search

    1976-06-01

    This report describes the results of an experiment for evaluating the controller/computer interface in an ARTS III/M&S system modified for use with a simulated digital data link and a voice link utilizing a computer-generated voice system. A modified...

  41. Gesture controlled human-computer interface for the disabled.

    PubMed

    Szczepaniak, Oskar M; Sawicki, Dariusz J

    2017-02-28

    The possibility of using a computer by a disabled person is one of the difficult problems of human-computer interaction (HCI), while professional activity (employment) is one of the most important factors affecting quality of life, especially for disabled people. The aim of the project has been to propose a new HCI system that would allow people who have lost the possibility of standard computer operation to resume employment. The basic requirement was to replace all functions of a standard mouse without the need for precise hand movements or the use of fingers. Microsoft's Kinect motion controller was selected as the device to recognize hand movements. Several tests were made in order to create an optimal working environment with the new device. A new communication system, consisting of the Kinect device and the appropriate software, was built. The proposed system was tested by means of standard subjective evaluations and objective metrics according to the standard ISO 9241-411:2012. The overall rating of the new HCI system shows acceptance of the solution. The objective tests show that although the new system is a bit slower, it may effectively replace the computer mouse. The new HCI system fulfilled its task for a specific disabled person, which resulted in the ability to return to work. Additionally, the project confirmed the possibility of effective but nonstandard use of the Kinect device. Med Pr 2017;68(1):1-21. This work is available in Open Access and licensed under a CC BY-NC 3.0 PL license.

  42. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The technical challenges, engineering solutions, and results of the NOCC computer-human interface design are presented. The user-centered design process was as follows: determine the design criteria for user concerns; assess the impact of design decisions on the users; and determine the technical aspects of the implementation (tools, platforms, etc.). The NOCC hardware architecture is illustrated. A graphical model of the DSN that represented the hierarchical structure of the data was constructed. The DSN spacecraft summary display is shown. Navigation from top to bottom is accomplished by clicking the appropriate button for the element about which the user desires more detail. The telemetry summary display and the antenna color decision table are also shown.

  43. An efficient use of mixing model for computing the effective dielectric and thermal properties of the human head.

    PubMed

    Mishra, Varsha; Puthucheri, Smitha; Singh, Dharmendra

    2018-05-07

    As a preventive measure against electromagnetic (EM) wave exposure to the human body, EM radiation regulatory authorities such as the ICNIRP and FCC have defined limits on the specific absorption rate (SAR) for the human head during EM wave exposure from mobile phones. SAR quantifies the absorption of EM waves in the human body and mainly depends on the dielectric properties (ε', σ) of the corresponding tissues. The head is more susceptible to EM wave exposure than other parts of the body due to the usage of mobile phones. The human head is a complex structure made up of multiple tissues with many intermixed layers; thus, the accurate measurement of the permittivity (ε') and conductivity (σ) of the tissues of the human head is still a challenge. For computing the SAR, researchers use multilayer models, which pose challenges in defining the boundaries between layers. Therefore, in this paper, an attempt has been made to propose a method to compute the effective complex permittivity of the human head in the range of 0.3 to 3.0 GHz by applying the De Loor mixing model. Similarly, for defining the thermal effect in the tissue, the thermal properties of the human head have also been computed using the De Loor mixing method. The effective dielectric and thermal properties of the equivalent human head model are compared with the IEEE Std. 1528.
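
    For reference, the SAR definition that these dielectric properties feed into is SAR = σ|E|²/ρ, where σ is tissue conductivity (S/m), E the induced electric field (V/m), and ρ the mass density (kg/m³); a worked sketch with round illustrative tissue values, not numbers from the paper.

      def sar(sigma_s_per_m, e_field_v_per_m, density_kg_per_m3):
          """Specific absorption rate in W/kg: SAR = sigma * |E|^2 / rho."""
          return sigma_s_per_m * e_field_v_per_m ** 2 / density_kg_per_m3

      print(sar(sigma_s_per_m=0.9, e_field_v_per_m=30.0,
                density_kg_per_m3=1000.0))   # -> 0.81 W/kg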

  44. Overview Electrotactile Feedback for Enhancing Human Computer Interface

    NASA Astrophysics Data System (ADS)

    Pamungkas, Daniel S.; Caesarendra, Wahyu

    2018-04-01

    To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback is increasingly being utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user's interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear, and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, it describes the sensory receptors within the skin that sense tactile stimuli and electric currents, as well as several factors that influence how electric signals are transmitted to the brain via the skin.

  45. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin

    2013-08-01

    Objective. At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real world device has on subjects' control in comparison to a 2D virtual cursor task. Approach. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward facing camera on the hull of the drone. Main results. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s-1. Significance. Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems for accomplishing complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.

  46. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface.

    PubMed

    LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin

    2013-08-01

    At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real world device has on subjects' control in comparison to a 2D virtual cursor task. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s(-1). Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems for accomplishing complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.

  47. Functional near-infrared spectroscopy for adaptive human-computer interfaces

    NASA Astrophysics Data System (ADS)

    Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.

    2015-03-01

    We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work [1] that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
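
    An illustrative pipeline for this kind of implicit-state BCI: per-trial fNIRS features (e.g., mean oxy-hemoglobin per channel) fed to a linear classifier that labels workload. Synthetic data stands in for real fNIRS recordings; the paper's actual features and classifier may differ.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      low = rng.normal(0.0, 1.0, (40, 8))    # 40 trials x 8 channel features
      high = rng.normal(0.8, 1.0, (40, 8))   # high workload shifts the HbO mean
      X = np.vstack([low, high])
      y = np.array([0] * 40 + [1] * 40)

      clf = LinearDiscriminantAnalysis().fit(X, y)
      new_trial = rng.normal(0.8, 1.0, (1, 8))
      print("predicted workload:", "high" if clf.predict(new_trial)[0] else "low")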

  8. User Language Considerations in Military Human-Computer Interface Design

    DTIC Science & Technology

    1988-06-30

    This report details the soldier language/culture issues of possible relevance to US military effectiveness, especially in those systems with critical human-computer interfaces. Contents include: Implications of Bilingualism (Stress Effects; Significance for the US Military) and Bilingualism and the Human-Computer Interface (Computer-specific…)

  9. A Framework and Implementation of User Interface and Human-Computer Interaction Instruction

    ERIC Educational Resources Information Center

    Peslak, Alan

    2005-01-01

    Researchers have suggested that up to 50% of the effort in development of information systems is devoted to user interface development (Douglas, Tremaine, Leventhal, Wills, & Manaris, 2002; Myers & Rosson, 1992). Yet little study has been performed on the inclusion of important interface and human-computer interaction topics into a current…

  10. Effects of Airport Tower Controller Decision Support Tool on Controllers Head-Up Time

    NASA Technical Reports Server (NTRS)

    Hayashi, Miwa; Cruz Lopez, Jose M.

    2013-01-01

    Although aircraft positions and movements can easily be monitored on radar displays at major airports nowadays, it is still important for air traffic control tower (ATCT) controllers to look outside the window as much as possible to ensure safe traffic operations. The present paper investigates whether the introduction of NASA's proposed Spot and Runway Departure Advisor (SARDA), a decision support tool for the ATCT controller, would increase or decrease the controllers' head-up time. SARDA provides the controller with departure-release schedule advisories, i.e., when to release each departure aircraft in order to minimize each aircraft's fuel consumption on taxiways and simultaneously maximize overall runway throughput. The SARDA advisories were presented on electronic flight strips (EFS). To investigate effects on head-up time, a human-in-the-loop simulation experiment with two retired ATCT controller participants was conducted in a high-fidelity ATCT cab simulator with a 360-degree computer-generated out-the-window view. Each controller participant wore a wearable video camera on the side of the head, facing forward. The video data were later used to calculate their line of sight at each moment and, ultimately, to identify their head-up times. Four sessions were run with the SARDA advisories and four without (baseline). Traffic-load levels were varied in each session. The same user interface (EFS and the radar displays) was used in both the advisory and baseline sessions to make them directly comparable. The paper reports the findings and discusses their implications.

  11. A brain-computer interface controlled mail client.

    PubMed

    Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Wang, Cong

    2013-01-01

    In this paper, we propose a brain-computer interface (BCI) based mail client. The system is controlled by hybrid features extracted from scalp-recorded electroencephalographic (EEG) signals. We emulate the computer mouse using the motor imagery-based mu rhythm and the P300 potential. Furthermore, an adaptive P300 speller is included to provide a text input function. With this BCI mail client, users can receive, read, and write mail, as well as attach files when writing. The system has been tested on three subjects. Experimental results show that mail communication with this system is feasible.

  12. Control of a visual keyboard using an electrocorticographic brain-computer interface.

    PubMed

    Krusienski, Dean J; Shih, Jerry J

    2011-05-01

    Brain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG. A total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard. This is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.
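
    The essence of a P300 keyboard of this kind is averaging epochs across repeated stimulation sequences and scoring the average with a trained linear classifier; the candidate whose averaged response scores highest is selected. A minimal sketch of the prediction step (the weight vector w is assumed to have been trained beforehand, e.g., by some form of linear discriminant analysis):

        import numpy as np

        def predict_character(epochs, w, b=0.0):
            """epochs: dict mapping candidate character -> array of shape
            (n_sequences, n_features), one feature row per stimulation
            sequence. Averaging across sequences raises the P300
            signal-to-noise ratio; the highest-scoring character wins."""
            scores = {ch: float(np.mean(e, axis=0) @ w + b)
                      for ch, e in epochs.items()}
            return max(scores, key=scores.get)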

  13. U.S. Army weapon systems human-computer interface style guide. Version 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avery, L.W.; O'Mara, P.A.; Shepard, A.P.

    1997-12-31

    A stated goal of the US Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Among the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army employs a number of HCI design guidance documents. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real-time and near-real-time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA), now termed the Joint Technical Architecture-Army (JTA-A). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems-unique HCI style guide, which resulted in the US Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide Version 1. Based on feedback from the user community, DISC4 further tasked PNNL to revise Version 1 and publish Version 2, updating some of the research and incorporating enhancements. This document provides that revision. Its purpose is to provide HCI design guidance for the RT/NRT Army system domain across the weapon systems subdomains of ground, aviation, missile, and soldier systems. Each subdomain should customize and extend this guidance by developing its own domain-specific style guide to direct the development of future systems within that subdomain.

  14. Human factors in air traffic control: problems at the interfaces.

    PubMed

    Shouksmith, George

    2003-10-01

    The triangular ISIS model for describing the operation of human factors in complex sociotechnical organisations or systems is applied in this research to a large international air traffic control system. A large sample of senior Air Traffic Controllers were randomly assigned to small focus discussion groups, whose task was to identify problems occurring at the interfaces of the three major human factor components: individual, system impacts, and social. From these discussions, a number of significant interface problems, which could adversely affect the functioning of the Air Traffic Control System, emerged. The majority of these occurred at the Individual-System Impact and Individual-Social interfaces and involved a perceived need for further interface centered training.

  15. Using Eye Movement to Control a Computer: A Design for a Lightweight Electro-Oculogram Electrode Array and Computer Interface

    PubMed Central

    Iáñez, Eduardo; Azorin, Jose M.; Perez-Vidal, Carlos

    2013-01-01

    This paper describes a human-computer interface based on electro-oculography (EOG) that allows interaction with a computer using eye movement. The EOG registers the movement of the eye by measuring, through electrodes, the difference of potential between the cornea and the retina. A new pair of EOG glasses has been designed to improve the user's comfort and to remove the manual procedure of placing the EOG electrodes around the user's eyes. The interface, which includes the EOG electrodes, uses a new processing algorithm that is able to detect the gaze direction and eye blinks from the EOG signals. The system reliably enabled subjects to control the movement of a dot on a video screen. PMID:23843986
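
    Because the corneo-retinal potential varies roughly linearly with gaze angle, and blinks appear as brief, large spikes on the vertical channel, a first-pass detector can be as simple as thresholding the two bipolar EOG channels. A sketch under those assumptions (the microvolt thresholds are illustrative, not from the paper):

        import numpy as np

        def classify_eog(h, v, gaze_thr=50.0, blink_thr=150.0):
            """h, v: horizontal/vertical EOG windows, baseline-corrected.
            Returns 'blink', a gaze direction, or 'center'."""
            if np.max(np.abs(v - np.median(v))) > blink_thr:
                return "blink"                      # brief large spike
            h_m, v_m = float(np.mean(h)), float(np.mean(v))
            if abs(h_m) < gaze_thr and abs(v_m) < gaze_thr:
                return "center"
            if abs(h_m) >= abs(v_m):
                return "right" if h_m > 0 else "left"
            return "up" if v_m > 0 else "down"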

  16. Implanted Miniaturized Antenna for Brain Computer Interface Applications: Analysis and Design

    PubMed Central

    Zhao, Yujuan; Rennaker, Robert L.; Hutchens, Chris; Ibrahim, Tamer S.

    2014-01-01

    Implantable Brain Computer Interfaces (BCIs) are designed to provide real-time control signals for prosthetic devices, study brain function, and/or restore sensory information lost as a result of injury or disease. Using radio frequency (RF) to wirelessly power a BCI could widely extend the number of applications and increase chronic in-vivo viability. However, due to the limited size and the electromagnetic loss of human brain tissues, implanted miniaturized antennas suffer low radiation efficiency. This work presents simulations, analysis, and designs of implanted antennas for a wireless implantable RF-powered brain computer interface application. The results show that thin (on the order of 100 micrometers thickness) biocompatible insulating layers can significantly impact antenna performance. Proper selection of the dielectric properties of the biocompatible insulating layers and of the implantation position inside human brain tissues can facilitate efficient RF power reception by the implanted antenna. While the results show that the effect of the human head shape on implanted antenna performance is somewhat negligible, the constitutive properties of the brain tissues surrounding the implanted antenna can significantly impact its electrical characteristics (input impedance and operational frequency). Three miniaturized antenna designs are simulated and demonstrate that maximum RF power of up to 1.8 milliwatts can be received at 2 GHz when the antenna is implanted around the dura, without violating Specific Absorption Rate (SAR) limits. PMID:25079941

  17. Robot Control Through Brain Computer Interface For Patterns Generation

    NASA Astrophysics Data System (ADS)

    Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.

    2011-09-01

    A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. Such a system can allow people with motor disabilities to control external devices through real-time modulation of their brain waves. In this context, an EEG-based BCI system that allows creative luminous artistic representations is presented here. The system, which has been designed and realized in our laboratory, interfaces the BCI2000 platform, performing real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.
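
    For readers interested in the plumbing: BCI2000 can stream states and signals to external applications over UDP (its AppConnector facility) as plain-text name/value lines. The exact line format, signal name, and port below are assumptions to be checked against the BCI2000 documentation, and the robot call is a stub.

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("127.0.0.1", 20320))   # hypothetical AppConnector port

        def drive_robot(value):
            """Stub for the radio/serial call that moves the twin robots."""
            print(f"robot command: {value:+.3f}")

        while True:
            packet, _ = sock.recvfrom(4096)
            for line in packet.decode("ascii", errors="ignore").splitlines():
                name, _, value = line.partition(" ")
                if name == "Signal(0,0)" and value:   # decoded control signal
                    drive_robot(float(value))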

  18. Glove-Enabled Computer Operations (GECO): Design and Testing of an Extravehicular Activity Glove Adapted for Human-Computer Interface

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott

    2013-01-01

    The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says game in which they attempted to duplicate random sequences of targeted finger strikes and (2) a Text Entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants' mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99% in both conditions. In Text Entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as a reduction in error rate. Secondary analyses of responses to NASA Task Load Index (TLX) subjective workload assessments reveal a benefit for tactile feedback in GECO glove use for data entry. This first-ever investigation of the employment of a pressurized EVA glove for human-computer interfacing opens up a wide range of future applications, including text "chat" communications, manipulation of procedures/checklists, cataloguing/annotating images, scientific note taking, human-robot interaction, and control of suit and/or other EVA systems.

  19. Glove-Enabled Computer Operations (GECO): Design and Testing of an Extravehicular Activity Glove Adapted for Human-Computer Interface

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott

    2013-01-01

    The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says game in which they attempted to duplicate random sequences of targeted finger strikes and (2) a Text Entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants' mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99% in both conditions. In Text Entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as a reduction in error rate. Secondary analyses of responses to NASA Task Load Index (TLX) subjective workload assessments reveal a benefit for tactile feedback in GECO glove use for data entry. This first-ever investigation of the employment of a pressurized EVA glove for human-computer interfacing opens up a wide range of future applications, including text "chat" communications, manipulation of procedures/checklists, cataloguing/annotating images, scientific note taking, human-robot interaction, and control of suit and/or other EVA systems.

  20. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general.

    PubMed

    Zander, Thorsten O; Kothe, Christian

    2011-04-01

    Cognitive monitoring is an approach utilizing real-time brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems based solely on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the user's intentions, situational interpretations, and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself, we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.

  1. SIG -- The Role of Human-Computer Interaction in Next-Generation Control Rooms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Jacques Hugo; Christian Richard

    2005-04-01

    The purpose of this CHI Special Interest Group (SIG) is to facilitate the convergence between human-computer interaction (HCI) and control room design. HCI researchers and practitioners actively need to infuse state-of-the-art interface technology into control rooms to meet usability, safety, and regulatory requirements. This SIG outlines potential HCI contributions to instrumentation and control (I&C) and automation in control rooms as well as to general control room design.

  2. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    ERIC Educational Resources Information Center

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  3. Controlling a human-computer interface system with a novel classification method that uses electrooculography signals.

    PubMed

    Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng

    2013-08-01

    Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
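
    One way to see how eight directions fall out of two bipolar channels: quantize each channel's mean deflection into {-1, 0, +1} and read the pair as a compass direction. The sketch below is a schematic reduction of such a classifier, with an illustrative threshold rather than the paper's trained features.

        import numpy as np

        DIRS = {( 0,  1): "up",         ( 0, -1): "down",
                ( 1,  0): "right",      (-1,  0): "left",
                ( 1,  1): "up-right",   (-1,  1): "up-left",
                ( 1, -1): "down-right", (-1, -1): "down-left"}

        def eight_direction(h, v, thr=40.0):
            """Quantize mean horizontal/vertical EOG deflection (assumed
            baseline-corrected) into one of eight directions, or None."""
            sh = 0 if abs(np.mean(h)) < thr else int(np.sign(np.mean(h)))
            sv = 0 if abs(np.mean(v)) < thr else int(np.sign(np.mean(v)))
            return DIRS.get((sh, sv))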

  4. Brain-Computer Interface Controlled Cyborg: Establishing a Functional Information Transfer Pathway from Human Brain to Cockroach Brain

    PubMed Central

    2016-01-01

    An all-chain-wireless brain-to-brain system (BTBS), which enabled motion control of a cyborg cockroach via the human brain, was developed in this work. A steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) was used in this system for recognizing human motion intention, and an optimization algorithm was proposed in SSVEP to improve the online performance of the BCI. The cyborg cockroach was developed by surgically integrating a portable microstimulator that could generate invasive electrical nerve stimulation. Through Bluetooth communication, specific electrical pulse trains could be triggered from the microstimulator by BCI commands and sent through the antenna nerve to stimulate the cockroach's brain. Serial experiments were designed and conducted to test the overall performance of the BTBS with six human subjects and three cockroaches. The experimental results showed that the online classification accuracy of the three-mode BCI increased from 72.86% to 78.56% (a 5.70% gain) with the optimization algorithm, and the mean response accuracy of the cyborgs using this system reached 89.5%. Moreover, the results also showed that the cyborg could be navigated by the human brain to complete walking along an S-shaped track with a success rate of about 20%, suggesting the proposed BTBS established a feasible functional information transfer pathway from the human brain to the cockroach brain. PMID:26982717

  5. Brain-Computer Interface Controlled Cyborg: Establishing a Functional Information Transfer Pathway from Human Brain to Cockroach Brain.

    PubMed

    Li, Guangye; Zhang, Dingguo

    2016-01-01

    An all-chain-wireless brain-to-brain system (BTBS), which enabled motion control of a cyborg cockroach via the human brain, was developed in this work. A steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) was used in this system for recognizing human motion intention, and an optimization algorithm was proposed in SSVEP to improve the online performance of the BCI. The cyborg cockroach was developed by surgically integrating a portable microstimulator that could generate invasive electrical nerve stimulation. Through Bluetooth communication, specific electrical pulse trains could be triggered from the microstimulator by BCI commands and sent through the antenna nerve to stimulate the cockroach's brain. Serial experiments were designed and conducted to test the overall performance of the BTBS with six human subjects and three cockroaches. The experimental results showed that the online classification accuracy of the three-mode BCI increased from 72.86% to 78.56% (a 5.70% gain) with the optimization algorithm, and the mean response accuracy of the cyborgs using this system reached 89.5%. Moreover, the results also showed that the cyborg could be navigated by the human brain to complete walking along an S-shaped track with a success rate of about 20%, suggesting the proposed BTBS established a feasible functional information transfer pathway from the human brain to the cockroach brain.

  6. Small computer interface to a stepper motor

    NASA Technical Reports Server (NTRS)

    Berry, Fred A., Jr.

    1986-01-01

    A Commodore VIC-20 computer has been interfaced with a stepper motor to provide an inexpensive stepper motor controller. Only eight transistors and two integrated circuits compose the interface. The software controls the parallel interface of the computer and provides the four phase drive signals for the motor. Optical sensors control the zeroing of the 12-inch turntable positioned by the controller. The computer calculates the position information and movement of the table and may be programmed in BASIC to execute automatic sequences.
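
    The "four phase drive signals" are simply a cyclic energizing pattern across the motor's coils. A sketch of that pattern in software, with the port write stubbed out (the original performed it through the VIC-20's parallel interface):

        import time

        # Full-step sequence for a four-phase unipolar stepper.
        SEQUENCE = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

        def write_coils(pattern):
            """Stub for the parallel-port write that energizes the coils."""
            print(pattern)

        def step(n, delay_s=0.01):
            """Advance n steps (negative n reverses the sequence)."""
            order = SEQUENCE if n > 0 else SEQUENCE[::-1]
            for i in range(abs(n)):
                write_coils(order[i % 4])
                time.sleep(delay_s)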

  7. Safety Metrics for Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  8. Videodisc-Computer Interfaces.

    ERIC Educational Resources Information Center

    Zollman, Dean

    1984-01-01

    Lists microcomputer-videodisc interfaces currently available from 26 sources, including home use systems connected through remote control jack and industrial/educational systems utilizing computer ports and new laser reflective and stylus technology. Information provided includes computer and videodisc type, language, authoring system, educational…

  9. Beyond intuitive anthropomorphic control: recent achievements using brain computer interface technologies

    NASA Astrophysics Data System (ADS)

    Pohlmeyer, Eric A.; Fifer, Matthew; Rich, Matthew; Pino, Johnathan; Wester, Brock; Johannes, Matthew; Dohopolski, Chris; Helder, John; D'Angelo, Denise; Beaty, James; Bensmaia, Sliman; McLoughlin, Michael; Tenore, Francesco

    2017-05-01

    Brain-computer interface (BCI) research has progressed rapidly, with BCIs shifting from animal tests to human demonstrations of controlling computer cursors and even advanced prosthetic limbs, the latter having been the goal of the Revolutionizing Prosthetics (RP) program. These achievements now include direct electrical intracortical microstimulation (ICMS) of the brain to provide human BCI users feedback information from the sensors of prosthetic limbs. These successes raise the question of how well people would be able to use BCIs to interact with systems that are not based directly on the body (e.g., prosthetic arms), and how well BCI users could interpret ICMS information from such devices. If paralyzed individuals could use BCIs to effectively interact with such non-anthropomorphic systems, it would offer them numerous new opportunities to control novel assistive devices. Here we explore how well a participant with tetraplegia can detect infrared (IR) sources in the environment using a prosthetic arm-mounted camera that encodes IR information via ICMS. We also investigate how well a BCI user could transition from controlling a BCI based on prosthetic arm movements to controlling a flight simulator, a system with different physical dynamics than the arm. In that test, the BCI participant used environmental information encoded via ICMS to identify which of several upcoming flight routes was the best option. For both tasks, the BCI user was able to quickly learn how to interpret the ICMS-provided information to achieve the task goals.

  10. An intelligent multi-media human-computer dialogue system

    NASA Technical Reports Server (NTRS)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  11. Destabilization of Human Balance Control by Static and Dynamic Head Tilts

    NASA Technical Reports Server (NTRS)

    Paloski, William H.; Wood, Scott J.; Feiveson, Alan H.; Black, F. Owen; Hwang, Emma Y.; Reschke, Millard F.

    2004-01-01

    To better understand the effects of varying head movement frequencies on human balance control, 12 healthy adult humans were studied during static and dynamic (0.14, 0.33, 0.6 Hz) head tilts of ±30° in the pitch and roll planes. Postural sway was measured during upright stance with eyes closed and altered somatosensory inputs provided by a computerized dynamic posturography (CDP) system. Subjects were able to maintain upright stance with static head tilts, although postural sway was increased during neck extension. Postural stability was decreased during dynamic head tilts, and the degree of destabilization varied directly with increasing frequency of head tilt. In the absence of vision and accurate foot support surface inputs, postural stability may be compromised during dynamic head tilts due to a decreased ability of the vestibular system to discern the orientation of gravity.

  12. Towards Intelligent Environments: An Augmented Reality–Brain–Machine Interface Operated with a See-Through Head-Mount Display

    PubMed Central

    Takano, Kouji; Hata, Naoki; Kansaku, Kenji

    2011-01-01

    The brain–machine interface (BMI) or brain–computer interface is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here, we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flicker visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that using an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR–BMI with both the HMD and LCD monitor. Our results indicate that AR–BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments. PMID:21541307

  13. Learning an Intermittent Control Strategy for Postural Balancing Using an EMG-Based Human-Computer Interface

    PubMed Central

    Asai, Yoshiyuki; Tateyama, Shota; Nomura, Taishin

    2013-01-01

    It has been considered that the brain stabilizes unstable body dynamics by regulating co-activation levels of antagonist muscles. Here we critically reexamined this established theory of impedance control in a postural balancing task using a novel EMG-based human-computer interface, in which subjects were asked to balance a virtual inverted pendulum using visual feedback information on the pendulum's position. The pendulum was actuated by a pair of antagonist joint torques determined in real-time by activations of the corresponding pair of antagonist ankle muscles of subjects standing upright. This motor task poses a frustrated environment: a large feedback time delay in the sensorimotor loop, as a source of instability, might favor adopting the non-reactive, preprogrammed impedance control, but the ankle muscles are relatively hard to co-activate, which hinders subjects from adopting the impedance control. This study aimed at discovering how experimental subjects resolved this frustrated environment through motor learning. One third of the subjects adapted to the balancing task by way of impedance-like control. It was remarkable, however, that the majority of subjects did not adopt the impedance control. Instead, they acquired a smart and energetically efficient strategy, in which the two muscles were inactivated simultaneously at a sequence of optimal timings, leading to the intermittent appearance of periods of time during which the pendulum was not actively actuated. Characterizations of muscle inactivations and the pendulum's sway showed that the strategy adopted by those subjects was a type of intermittent control that utilizes a stable manifold of the saddle-type unstable upright equilibrium that appears in the state space of the pendulum when the active actuation is turned off. PMID:23717398
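
    The mechanics of that intermittent strategy can be illustrated on the linearized pendulum: the unactuated saddle has a stable eigendirection (angular velocity approximately equal to -lambda times angle, with lambda = sqrt(g/L)), and control is switched off whenever the state is close to it, letting the passive dynamics carry the pendulum toward upright. The simulation below is a schematic of that idea with illustrative gains and switching band, not the paper's fitted model.

        import numpy as np

        G_OVER_L = 9.8                 # g/L for a unit-length pendulum (assumed)
        LAM = np.sqrt(G_OVER_L)        # saddle eigenvalue magnitude

        def simulate(theta=0.1, omega=0.0, dt=1e-3, t_end=10.0,
                     kp=25.0, kd=5.0, band=0.3):
            """Euler simulation: torque is OFF near the stable manifold
            (omega ~ -LAM * theta) and a PD torque is applied elsewhere."""
            trace = []
            for _ in range(int(t_end / dt)):
                off = abs(omega + LAM * theta) < band * LAM * abs(theta)
                u = 0.0 if off else kp * theta + kd * omega
                theta, omega = (theta + dt * omega,
                                omega + dt * (G_OVER_L * theta - u))
                trace.append((theta, not off))   # (angle, torque active?)
            return trace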

  14. Eye Tracking Based Control System for Natural Human-Computer Interaction

    PubMed Central

    Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching for an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design. PMID:29403528

  15. Eye Tracking Based Control System for Natural Human-Computer Interaction.

    PubMed

    Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching for an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.

  16. Toward a model-based predictive controller design in brain-computer interfaces.

    PubMed

    Kamrunnahar, M; Dias, N S; Schiff, S J

    2011-05-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters for designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor imagery task discrimination, with performance (8-23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high-performance BCI applications.
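
    To make the feature extraction concrete: an autoregressive model predicts each EEG sample from its previous p samples, and the fitted coefficients form a compact per-trial feature vector. A least-squares sketch (the model order and classifier choice are illustrative):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def ar_coeffs(x, order=6):
            """Least-squares AR fit: x[t] ~ sum_k a_k * x[t-k], k = 1..order."""
            X = np.column_stack([x[order - k - 1 : len(x) - k - 1]
                                 for k in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            return a

        # Per-trial AR coefficients as features for task discrimination:
        # feats = np.array([ar_coeffs(trial) for trial in trials])
        # LinearDiscriminantAnalysis().fit(feats, labels)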

  17. Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces

    PubMed Central

    Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.

    2013-01-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters for designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor imagery task discrimination, with performance (8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high-performance BCI applications. PMID:21267657

  18. Effects of External Loads on Human Head Movement Control Systems

    NASA Technical Reports Server (NTRS)

    Nam, M. H.; Choi, O. M.

    1984-01-01

    The central and reflexive control strategies underlying movements were elucidated by studying the effects of external loads on human head movement control systems. Some experimental results are presented on dynamic changes with the addition of an aviation helmet (SPH-4) and lead weights (6 kg). Intended time-optimal movements, their dynamics, and the electromyographic activity of neck muscles were measured in normal movements and in movements made with external weights applied to the head. It was observed that, when the external loads were added, the subject went through complex adapting processes, and the head movement trajectory and its derivatives reached steady conditions only after a transient adapting period. The steady adapted state was reached after 15 to 20 seconds (i.e., 5 to 6 movements).

  19. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  20. New generation of 3D desktop computer interfaces

    NASA Astrophysics Data System (ADS)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from, e.g., low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  1. Human/computer control of undersea teleoperators

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Verplank, W. L.; Brooks, T. L.

    1978-01-01

    The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.

  2. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect the brain areas that control the movements of the patient's body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron disease.
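
    The two-class serial scheme is simple to state in code: the interface cycles through the four navigation commands, cueing each one by audio, and a detected right-hand motor-imagery "click" selects the command currently being announced. The sketch below captures only that selection logic; the decoder, dwell time, and audio call are stand-ins.

        import itertools
        import time

        COMMANDS = ["move forward", "turn right", "move backward", "turn left"]

        def select_command(mi_detected, dwell_s=2.0):
            """mi_detected() -> True when the decoder reports right-hand
            motor imagery during the cue window, False for the idle state."""
            for cmd in itertools.cycle(COMMANDS):
                print(f"(audio cue) {cmd}?")
                time.sleep(dwell_s)          # decoding window for this cue
                if mi_detected():
                    return cmd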

  3. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919
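
    A toy instance of the framework's claim: if encoding is linear (neural activity y = A x + noise for intent x), the optimal fixed linear decoder has a closed form, and co-adaptation cannot improve on it combined with ideal user learning. A numerical sketch (the dimensions and noise level are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(20, 2))            # encoder: 2-D intent -> 20 units
        X = rng.normal(size=(2, 5000))          # intended cursor velocities
        Y = A @ X + 0.5 * rng.normal(size=(20, 5000))

        # Least-squares decoder minimizing ||X - W Y||: W = X Y' (Y Y')^-1
        W = X @ Y.T @ np.linalg.inv(Y @ Y.T)
        print("mean squared decode error:", np.mean((X - W @ Y) ** 2))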

  4. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  5. [Design and implementation of controlling smart car systems using P300 brain-computer interface].

    PubMed

    Wang, Jinjia; Yang, Chengjie; Hu, Bei

    2013-04-01

    Using the human electroencephalogram (EEG) to control external devices in order to achieve a variety of functions has been a focus of brain-computer interface (BCI) research. P300 experiments evoke EEG responses with flashing letters and then identify the corresponding letters. In this paper, some improvements were made based on the P300 experiments. First, the matrix of flashing letters was modified into words representing a certain meaning. Second, the corresponding source code was added to the BCI2000 procedures. Third, a smart car system was designed using radio-frequency signals. Finally, the evoked potentials were used to control the state of the smart car.

  6. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    DTIC Science & Technology

    2016-07-27

    synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces, providing a test environment in which human control of a robot agent can be experimentally validated. (Final report covering 17-Sep-2013 through 16-Sep-2014.)

  7. PC-based control unit for a head-mounted operating microscope for augmented-reality visualization in surgical navigation

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar

    2002-05-01

    Two main concepts of Head Mounted Displays (HMD) for augmented reality (AR) visualization exist: the optical and the video see-through type. Several research groups have pursued both approaches for utilizing HMDs for computer aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate seem to be enormous, the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work in displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head-mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR, we used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was made by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer-aided surgery (CAS) system via the TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems, the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
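
    Since the paper defines the CAS link only as an open TCP/IP interface, a minimal receiving loop on the control-unit side might look like the sketch below; the one-pose-per-line wire format and the port are assumptions for illustration, not the paper's specification.

        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", 5050))             # hypothetical port
        srv.listen(1)
        conn, _ = srv.accept()

        buf = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                pose = [float(f) for f in line.split()]  # e.g. x y z + quaternion
                print("render overlay at:", pose)        # hand off to renderer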

  8. Ecological Interface Design for Computer Network Defense.

    PubMed

    Bennett, Kevin B; Bryant, Adam; Sushereba, Christen

    2018-05-01

    A prototype ecological interface for computer network defense (CND) was developed. Concerns about CND run high. Although there is a vast literature on CND, there is some indication that this research is not being translated into operational contexts. Part of the reason may be that CND has historically been treated as a strictly technical problem, rather than as a socio-technical problem. The cognitive systems engineering (CSE)/ecological interface design (EID) framework was used in the analysis and design of the prototype interface. A brief overview of CSE/EID is provided. EID principles of design (i.e., direct perception, direct manipulation, and visual momentum) are described and illustrated through concrete examples from the ecological interface. Key features of the ecological interface include (a) a wide variety of alternative visual displays, (b) controls that allow easy, dynamic reconfiguration of these displays, (c) visual highlighting of functionally related information across displays, (d) control mechanisms to selectively filter massive data sets, and (e) the capability for easy expansion. Cyber attacks from a well-known data set are illustrated through screen shots. CND support needs to be developed with a triadic focus (i.e., humans interacting with technology to accomplish work) if it is to be effective. Iterative design and formal evaluation are also required. The discipline of human factors has a long tradition of success on both counts; it is time that HF became fully involved in CND. This work has direct application in supporting cyber analysts.

  9. Computer interface system

    NASA Technical Reports Server (NTRS)

    Anderson, T. O. (Inventor)

    1976-01-01

    An interface logic circuit permitting the transfer of information between two computers having asynchronous clocks is disclosed. The information transfer involves utilization of control signals (including request, return-response, ready) to generate properly timed data strobe signals. Noise problems are avoided because each control signal, upon receipt, is verified by at least two clock pulses at the receiving computer. If control signals are verified, a data strobe pulse is generated to accomplish a data transfer. Once initiated, the data strobe signal is properly completed independently of signal disturbances in the control signal initiating the data strobe signal. Completion of the data strobe signal is announced by automatic turn-off of a return-response control signal.
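
    The noise-rejection rule here (act on a control line only after it has been verified by at least two consecutive clock samples at the receiver) is easy to model in software:

        def verified(samples, n=2):
            """True once the sampled control line is high for n consecutive
            receiver clock ticks, mirroring the patent's verification step."""
            run = 0
            for s in samples:
                run = run + 1 if s else 0
                if run >= n:
                    return True
            return False

        assert not verified([0, 1, 0, 0])   # one-sample glitch is rejected
        assert verified([0, 1, 1, 0])       # sustained request is accepted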

  10. Brain-Computer Interfaces in Medicine

    PubMed Central

    Shih, Jerry J.; Krusienski, Dean J.; Wolpaw, Jonathan R.

    2012-01-01

    Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function. PMID:22325364

  11. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
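
    Because the residual-error hypersurfaces contain local minima, the search for the best-matching dipole must be global rather than gradient-based. The sketch below illustrates such a search under stated assumptions: `lead_field(pos, ori)` is a hypothetical forward model mapping a dipole position and orientation to predicted scalp potentials, and candidates are drawn from coarse grids.

    ```python
    import numpy as np

    # Global grid search for the best equivalent dipole: every candidate is
    # scored by its least-squares residual against the measured EEG, and the
    # global minimum is kept, sidestepping local minima in the error surface.

    def fit_dipole(eeg, positions, orientations, lead_field):
        best, best_err = None, np.inf
        for pos in positions:              # grid over the head volume
            for ori in orientations:       # unit-moment orientations
                g = lead_field(pos, ori)   # predicted scalp topography
                amp = (g @ eeg) / (g @ g)  # best-fitting source amplitude
                err = np.linalg.norm(eeg - amp * g)
                if err < best_err:
                    best, best_err = (pos, ori, amp), err
        return best, best_err
    ```

    The location of the winning dipole (its elevation alone for a spherical head model, or all three coordinates for a realistic model) would then serve as the classification parameter.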

  12. Computer interface for mechanical arm

    NASA Technical Reports Server (NTRS)

    Derocher, W. L.; Zermuehlen, R. O.

    1978-01-01

    Man/machine interface commands computer-controlled mechanical arm. Remotely-controlled arm has six degrees of freedom and is controlled through "supervisory-control" mode, in which all motions of arm follow set of preprogrammed sequences. For simplicity, few prescribed commands are required to accomplish entire operation. Applications include operating computer-controlled arm to handle radioactive or explosive materials or commanding arm to perform functions in hostile environments. Modified version using displays may be applied in medicine.

  13. An optical brain computer interface for environmental control.

    PubMed

    Ayaz, Hasan; Shewokis, Patricia A; Bunce, Scott; Onaral, Banu

    2011-01-01

    A brain computer interface (BCI) is a system that translates neurophysiological signals detected from the brain to supply input to a computer or to control a device. Volitional control of neural activity and its real-time detection through neuroimaging modalities are key constituents of BCI systems. The purpose of this study was to develop and test a new BCI design that utilizes intention-related cognitive activity within the dorsolateral prefrontal cortex using functional near infrared (fNIR) spectroscopy. fNIR is a noninvasive, safe, portable and affordable optical technique with which to monitor hemodynamic changes in the brain's cerebral cortex. Because of its portability and ease of use, fNIR is amenable to deployment in ecologically valid natural working environments. We integrated a control paradigm in a computerized 3D virtual environment to augment interactivity. Ten healthy participants volunteered for a two-day study in which they navigated a virtual environment with keyboard inputs, but were required to use the fNIR-BCI for interaction with virtual objects. Results showed that participants consistently utilized the fNIR-BCI, with an overall success rate of 84%, and volitionally increased their cerebral oxygenation level to trigger actions within the virtual environment.
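
    As a rough sketch of the control paradigm, the volitional trigger can be reduced to comparing a smoothed oxygenation estimate against a resting baseline. Everything below (the threshold rule, the factor k, the window sizes) is an assumption made for illustration, not the study's actual signal chain.

    ```python
    import numpy as np

    # Hypothetical fNIR trigger: an action fires when the windowed mean of
    # the oxygenation signal rises k standard deviations above a resting
    # baseline, mirroring "volitionally increase oxygenation to act".

    def make_trigger(baseline_samples, k=3.0):
        mu, sigma = np.mean(baseline_samples), np.std(baseline_samples)
        threshold = mu + k * sigma
        def trigger(oxy_window):
            return np.mean(oxy_window) > threshold  # True -> act in the VE
        return trigger

    rest = np.random.normal(0.0, 1.0, 300)           # resting-state record
    trigger = make_trigger(rest)
    print(trigger(np.random.normal(4.0, 1.0, 25)))   # sustained rise -> True
    ```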

  14. Adapting human-machine interfaces to user performance.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2008-01-01

    The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor, located at the endpoint of the simulated arm, onto the targets.
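
    The cursor position follows from the standard forward kinematics of a planar 2-link arm. The sketch below assumes unit link lengths, which the abstract does not specify.

    ```python
    from math import cos, sin

    # Forward kinematics of a planar 2-link arm: the cursor sits at the
    # endpoint, so decoded finger motions -> joint angles -> cursor position.

    def endpoint(q1, q2, l1=1.0, l2=1.0):
        """Endpoint position for shoulder angle q1 and elbow angle q2 (rad)."""
        x = l1 * cos(q1) + l2 * cos(q1 + q2)
        y = l1 * sin(q1) + l2 * sin(q1 + q2)
        return x, y

    print(endpoint(0.0, 1.5708))  # elbow bent ~90 degrees -> (~1.0, ~1.0)
    ```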

  15. Brain-computer interfaces in medicine.

    PubMed

    Shih, Jerry J; Krusienski, Dean J; Wolpaw, Jonathan R

    2012-03-01

    Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function. Copyright © 2012 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  16. Language evolution and human-computer interaction

    NASA Technical Reports Server (NTRS)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  17. Multimodal neuroelectric interface development

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Rosipal, Roman; Clanton, Sam T.; Matthews, Bryan; Hibbs, Andrew D.; Matthews, Robert; Krupka, Michael

    2003-01-01

    We are developing electromyographic and electroencephalographic methods, which draw control signals for human-computer interfaces from the human nervous system. We have made progress in four areas: 1) real-time pattern recognition algorithms for decoding sequences of forearm muscle activity associated with control gestures; 2) signal-processing strategies for computer interfaces using electroencephalogram (EEG) signals; 3) a flexible computation framework for neuroelectric interface research; and 4) noncontact sensors, which measure electromyogram or EEG signals without resistive contact to the body.

  18. Designing effective human-automation-plant interfaces: a control-theoretic perspective.

    PubMed

    Jamieson, Greg A; Vicente, Kim J

    2005-01-01

    In this article, we propose the application of a control-theoretic framework to human-automation interaction. The framework consists of a set of conceptual distinctions that should be respected in automation research and design. We demonstrate how existing automation interface designs in some nuclear plants fail to recognize these distinctions. We further show the value of the approach by applying it to modes of automation. The design guidelines that have been proposed in the automation literature are evaluated from the perspective of the framework. This comparison shows that the framework reveals insights that are frequently overlooked in this literature. A new set of design guidelines is introduced that builds upon the contributions of previous research and draws complementary insights from the control-theoretic framework. The result is a coherent and systematic approach to the design of human-automation-plant interfaces that will yield more concrete design criteria and a broader set of design tools. Applications of this research include improving the effectiveness of human-automation interaction design and the relevance of human-automation interaction research.

  19. Motor prediction in Brain-Computer Interfaces for controlling mobile robots.

    PubMed

    Geng, Tao; Gan, John Q

    2008-01-01

    An EEG-based Brain-Computer Interface (BCI) can be regarded as a new channel for motor control, except that it does not involve muscles. Normal neuromuscular motor control has two fundamental components: (1) to control the body, and (2) to predict the consequences of the control command, which is called motor prediction. In this study, after training with a specially designed BCI paradigm based on motor imagery, two subjects learnt to predict the time course of some features of the EEG signals. It is shown that, with this newly obtained motor prediction skill, subjects can use motor imagery of the feet to directly control a mobile robot to avoid obstacles and reach a small target in a time-critical scenario.

  20. Human machine interface to manually drive rhombic like vehicles such as transport casks in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, Pedro; Vale, Alberto; Ventura, Rodrigo

    2015-07-01

    The Cask and Plug Remote Handling System (CPRHS) and the respective Cask Transfer System (CTS) are designed to transport activated components between the reactor and the hot cell buildings of ITER during maintenance operations. In nominal operation, the CPRHS/CTS shall operate autonomously under human supervision. However, in some unexpected situations, the automatic mode must be overridden and the vehicle must be remotely guided by a human operator due to the harsh conditions of the environment. The CPRHS/CTS is a rhombic-like vehicle with two independent steerable and drivable wheels along its longitudinal axis, giving it omni-directional capabilities. During manual guidance, the human operator has to deal with four degrees of freedom, namely the orientations and speeds of the two wheels. This work proposes a Human Machine Interface (HMI) to manage these degrees of freedom and to remotely guide the CPRHS/CTS in ITER, taking full advantage of its rhombic-like capabilities. Previous work controlled the orientation and speed of each wheel independently. The results showed that this solution is inefficient: the attention of the human operator becomes focused on a single wheel, and the commands cannot be guaranteed to respect the physical constraints of the vehicle, resulting in slippage or even collisions. This work proposes a solution that consists in controlling the vehicle through the position of its center of mass and its heading in the world frame. The solution is implemented using a rotational disk to control the vehicle heading and a common analogue joystick to control the velocity vector of the center of mass of the vehicle. The number of degrees of freedom reduces to three, i.e., two angles (the vehicle heading and the orientation of the velocity vector) and a scalar (the magnitude of the velocity vector). This is possible using a kinematic model based on the vehicle
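
    The reduction from four wheel-level degrees of freedom to three vehicle-level ones can be illustrated with plain rigid-body kinematics. The sketch below is an assumption-laden illustration, not the ITER implementation: the wheels are taken to sit at +d and -d along the longitudinal axis, and each wheel command follows from v_wheel = v_cm + omega x r.

    ```python
    from math import atan2, hypot, cos, sin

    # Map a commanded center-of-mass velocity (joystick) and yaw rate
    # (heading dial) to per-wheel speed and steering for a rhombic vehicle
    # with wheels at +/-d along its longitudinal axis.

    def wheel_commands(vx, vy, omega, heading, d=1.0):
        """vx, vy: CoM velocity (world frame); omega: yaw rate;
        heading: vehicle heading (world frame); d: CoM-to-wheel distance."""
        commands = []
        for sign in (+1, -1):                  # front wheel, rear wheel
            rx = sign * d * cos(heading)       # wheel offset in world frame
            ry = sign * d * sin(heading)
            wx = vx - omega * ry               # v_wheel = v_cm + omega x r
            wy = vy + omega * rx
            speed = hypot(wx, wy)
            steer = atan2(wy, wx) - heading    # steering in the body frame
            commands.append((speed, steer))
        return commands  # [(front_speed, front_steer), (rear_speed, rear_steer)]
    ```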

  1. Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms.

    PubMed

    Royer, Audrey S; He, Bin

    2009-02-01

    In a brain-computer interface (BCI) utilizing a process control strategy, the signal from the cortex is used to control the fine motor details normally handled by other parts of the brain. In a BCI utilizing a goal selection strategy, the signal from the cortex is used to determine the overall end goal of the user, and the BCI controls the fine motor details. A BCI based on goal selection may be an easier and more natural system than one based on process control. Although goal selection may in theory surpass process control, the two strategies had never been directly compared before the study reported here. Eight young healthy human subjects participated in the present study, three trained and five naïve in BCI usage. Scalp-recorded electroencephalograms (EEG) were used to control a computer cursor during five different paradigms. The paradigms were similar in their underlying signal processing and used the same control signal. However, three were based on goal selection, and two on process control. For both the trained and naïve populations, goal selection had more hits per run, was faster, was more accurate (for seven out of eight subjects) and had a higher information transfer rate than process control. Goal selection outperformed process control in every measure studied in the present investigation.

  2. Brain-computer interfaces in the continuum of consciousness.

    PubMed

    Kübler, Andrea; Kotchoubey, Boris

    2007-12-01

    To summarize recent developments and look at important future aspects of brain-computer interfaces. Recent brain-computer interface studies are largely targeted at helping severely or even completely paralysed patients. The former are only able to communicate yes or no via a single muscle twitch, and the latter are totally nonresponsive. Such patients can control brain-computer interfaces and use them to select letters, words or items on a computer screen, for neuroprosthesis control or for surfing the Internet. This condition of motor paralysis, in which cognition and consciousness appear to be unaffected, is traditionally opposed to nonresponsiveness due to disorders of consciousness. Although these groups of patients may appear to be very alike, numerous transition states between them are demonstrated by recent studies. All nonresponsive patients can be regarded on a continuum of consciousness which may vary even within short time periods. As overt behaviour is lacking, cognitive functions in such patients can only be investigated using neurophysiological methods. We suggest that brain-computer interfaces may provide a new tool to investigate cognition in disorders of consciousness, and propose a hierarchical procedure entailing passive stimulation, active instructions, volitional paradigms, and brain-computer interface operation.

  3. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
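
    The byte-compare-and-switch behavior can be pictured as a dispatch table standing in for the ROM. The codes and switch names below are invented for the sketch; the actual ROM contents are not given in the record.

    ```python
    # Byte-coded dispatch: each byte from the host is compared against
    # prestored codes; a match closes or opens the corresponding switch,
    # and unmatched bytes are ignored.

    SWITCH_CODES = {                      # illustrative stand-in for ROM
        0x10: ("pan_left", True),  0x11: ("pan_left", False),
        0x12: ("tilt_up", True),   0x13: ("tilt_up", False),
        0x14: ("zoom_in", True),   0x15: ("zoom_in", False),
    }

    switch_state = {}

    def handle_byte(code):
        if code in SWITCH_CODES:
            name, closed = SWITCH_CODES[code]
            switch_state[name] = closed   # close (True) or open (False)

    handle_byte(0x14)
    print(switch_state)  # {'zoom_in': True}
    ```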

  4. Implantable brain computer interface: challenges to neurotechnology translation.

    PubMed

    Konrad, Peter; Shanks, Todd

    2010-06-01

    This article reviews three concepts related to implantable brain computer interface (BCI) devices being designed for human use: neural signal extraction primarily for motor commands, signal insertion to restore sensation, and technological challenges that remain. A significant body of literature has accumulated over the past four decades regarding motor cortex signal extraction for upper extremity movement or computer interface. However, little is discussed regarding postural or ambulation command signaling. Auditory prosthesis research continues to represent the majority of the literature on BCI signal insertion. Significant hurdles remain in the technological translation of BCI implants. These include developing a stable neural interface, significantly increasing signal processing capabilities, and methods of data transfer throughout the human body. The past few years, however, have provided extraordinary human examples of BCI implant potential. Despite technological hurdles, proof-of-concept animal and human studies provide significant encouragement that BCI implants may well find their way into mainstream medical practice in the foreseeable future.

  5. Issues in Afloat Command Control: The Computer-Commander Interface

    DTIC Science & Technology

    1979-03-01

    Naval Postgraduate School thesis, Monterey, California, 1979. The thesis examines issues in afloat command control, in particular the computer-commander interface.

  6. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia*

    PubMed Central

    Kim, Sung-Phil; Simeral, John D; Hochberg, Leigh R; Donoghue, John P; Black, Michael J

    2010-01-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. PMID:19015583

  7. Brain-computer interface control along instructed paths

    NASA Astrophysics Data System (ADS)

    Sadtler, P. T.; Ryu, S. I.; Tyler-Kabara, E. C.; Yu, B. M.; Batista, A. P.

    2015-02-01

    Objective. Brain-computer interfaces (BCIs) are being developed to assist paralyzed people and amputees by translating neural activity into movements of a computer cursor or prosthetic limb. Here we introduce a novel BCI task paradigm, intended to help accelerate improvements to BCI systems. Through this task, we can push the performance limits of BCI systems, we can quantify more accurately how well a BCI system captures the user’s intent, and we can increase the richness of the BCI movement repertoire. Approach. We have implemented an instructed path task, wherein the user must drive a cursor along a visible path. The instructed path task provides a versatile framework to increase the difficulty of the task and thereby push the limits of performance. Relative to traditional point-to-point tasks, the instructed path task allows more thorough analysis of decoding performance and greater richness of movement kinematics. Main results. We demonstrate that monkeys are able to perform the instructed path task in a closed-loop BCI setting. We further investigate how the performance under BCI control compares to native arm control, whether users can decrease their movement variability in the face of a more demanding task, and how the kinematic richness is enhanced in this task. Significance. The use of the instructed path task has the potential to accelerate the development of BCI systems and their clinical translation.

  8. Hybrid EEG-EOG brain-computer interface system for practical machine control.

    PubMed

    Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid

    2010-01-01

    Practical issues such as accuracy across subjects, the number of sensors, and the time required for training are important problems of existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no-action, and complete-stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.
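
    The division of labor between the two biosignals can be sketched as a two-stage rule: horizontal EOG polarity selects left or right turns, and an EEG classifier, treated as a black box here, selects among forward, no action, and complete stop. The threshold value and labels below are illustrative assumptions.

    ```python
    # Hybrid EEG-EOG rule: EOG handles steering, EEG handles go/hold/stop.

    EOG_THRESHOLD = 50.0  # microvolts; illustrative value

    def hybrid_command(eog_sample, eeg_class):
        """`eeg_class` is the output of an EEG classifier (not shown):
        one of 'forward', 'none', or 'stop'."""
        if eog_sample > EOG_THRESHOLD:
            return "turn_right"
        if eog_sample < -EOG_THRESHOLD:
            return "turn_left"
        return {"forward": "go_forward",
                "none": "hold",
                "stop": "full_stop"}[eeg_class]

    print(hybrid_command(80.0, "none"))    # turn_right
    print(hybrid_command(0.0, "forward"))  # go_forward
    ```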

  9. A Human Machine Interface for EVA

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6 degree of freedom (dof) pointing device and a head-up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can respond with speech also in order to confirm selections, provide status and feedback and present text output. A candidate 6 dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3D geometric information, including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field, including robot joint torques, end effector configuration, procedure checklists and virtual control panels. With suitable tracking information about the position and orientation of the EVA suit

  10. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
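
    At its core, HRTF-based spatialization convolves a mono source with a pair of head-related impulse responses (HRIRs), one per ear, measured for the desired direction. A minimal sketch, with random noise standing in for measured HRIRs:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Binaural rendering: convolving a mono signal with left/right HRIRs
    # yields a two-channel signal perceived as coming from the HRIRs'
    # measurement direction.

    def binauralize(mono, hrir_left, hrir_right):
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        n = max(len(left), len(right))
        out = np.zeros((n, 2))
        out[:len(left), 0] = left
        out[:len(right), 1] = right
        return out  # columns: left ear, right ear

    mono = np.random.randn(48000)                        # 1 s at 48 kHz
    hl, hr = np.random.randn(256), np.random.randn(256)  # placeholder HRIRs
    print(binauralize(mono, hl, hr).shape)               # (48255, 2)
    ```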

  11. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  12. Passive wireless tags for tongue controlled assistive technology interfaces.

    PubMed

    Rakibet, Osman O; Horne, Robert J; Kelly, Stephen W; Batchelor, John C

    2016-03-01

    Tongue control with low-profile, passive mouth tags is demonstrated as a human-device interface by communicating values of tongue-tag separation over a wireless link. Confusion matrices are provided to demonstrate user accuracy in targeting by tongue position. Accuracy is found to increase dramatically after short training sequences, with errors falling close to 1% in magnitude and zero missed targets. The rate at which users are able to learn accurate targeting indicates that this is an intuitive device to operate. The significance of the work is that innovative, very unobtrusive wireless tags can be used to provide intuitive human-computer interfaces based on low-cost, disposable mouth-mounted technology. With the development of an appropriate reading system, control of assistive devices such as computer mice or wheelchairs could be possible for tetraplegics and others who retain fine motor control capability of their tongues. The tags contain no battery and are intended to fit directly on the hard palate, detecting tongue position in the mouth with no need for tongue piercings.

  13. A brain computer interface using electrocorticographic signals in humans

    NASA Astrophysics Data System (ADS)

    Leuthardt, Eric C.; Schalk, Gerwin; Wolpaw, Jonathan R.; Ojemann, Jeffrey G.; Moran, Daniel W.

    2004-06-01

    Brain-computer interfaces (BCIs) enable users to control devices with electroencephalographic (EEG) activity from the scalp or with single-neuron activity from within the brain. Both methods have disadvantages: EEG has limited resolution and requires extensive training, while single-neuron recording entails significant clinical risks and has limited stability. We demonstrate here for the first time that electrocorticographic (ECoG) activity recorded from the surface of the brain can enable users to control a one-dimensional computer cursor rapidly and accurately. We first identified ECoG signals that were associated with different types of motor and speech imagery. Over brief training periods of 3-24 min, four patients then used these signals to master closed-loop control and to achieve success rates of 74-100% in a one-dimensional binary task. In additional open-loop experiments, we found that ECoG signals at frequencies up to 180 Hz encoded substantial information about the direction of two-dimensional joystick movements. Our results suggest that an ECoG-based BCI could provide for people with severe motor disabilities a non-muscular communication and control option that is more powerful than EEG-based BCIs and is potentially more stable and less traumatic than BCIs that use electrodes penetrating the brain. The authors declare that they have no competing financial interests.

  14. Comparisons of Computed Mobile Phone Induced SAR in the SAM Phantom to That in Anatomically Correct Models of the Human Head

    PubMed Central

    Beard, Brian B.; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana

    2018-01-01

    The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also the larger (adult) head produced a statistically significant higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position. PMID:29515260

  15. Comparisons of Computed Mobile Phone Induced SAR in the SAM Phantom to That in Anatomically Correct Models of the Human Head.

    PubMed

    Beard, Brian B; Kainz, Wolfgang; Onishi, Teruo; Iyama, Takahiro; Watanabe, Soichi; Fujiwara, Osamu; Wang, Jianqing; Bit-Babik, Giorgi; Faraone, Antonio; Wiart, Joe; Christ, Andreas; Kuster, Niels; Lee, Ae-Kyoung; Kroeze, Hugo; Siegbahn, Martin; Keshvari, Jafar; Abrishamkar, Houman; Simon, Winfried; Manteuffel, Dirk; Nikoloski, Neviana

    2006-06-05

    The specific absorption rates (SAR) determined computationally in the specific anthropomorphic mannequin (SAM) and anatomically correct models of the human head when exposed to a mobile phone model are compared as part of a study organized by IEEE Standards Coordinating Committee 34, SubCommittee 2, and Working Group 2, and carried out by an international task force comprising 14 government, academic, and industrial research institutions. The detailed study protocol defined the computational head and mobile phone models. The participants used different finite-difference time-domain software and independently positioned the mobile phone and head models in accordance with the protocol. The results show that when the pinna SAR is calculated separately from the head SAR, SAM produced a higher SAR in the head than the anatomically correct head models. Also the larger (adult) head produced a statistically significant higher peak SAR for both the 1- and 10-g averages than did the smaller (child) head for all conditions of frequency and position.

  16. Modulation of Posterior Alpha Activity by Spatial Attention Allows for Controlling A Continuous Brain-Computer Interface.

    PubMed

    Horschig, Jörn M; Oosterheert, Wouter; Oostenveld, Robert; Jensen, Ole

    2015-11-01

    Here we report that the modulation of alpha activity by covert attention can be used as a control signal in an online brain-computer interface, that it is reliable, and that it is robust. Subjects were instructed to orient covert visual attention to the left or right hemifield. We decoded the direction of attention from the magnetoencephalogram by a template matching classifier and provided the classification outcome to the subject in real-time using a novel graphical user interface. Training data for the templates were obtained from a Posner-cueing task conducted just before the BCI task. Eleven subjects participated in four sessions each. Eight of the subjects achieved classification rates significantly above chance level. Subjects were able to significantly increase their performance from the first to the second session. Individual patterns of posterior alpha power remained stable throughout the four sessions and did not change with increased performance. We conclude that posterior alpha power can successfully be used as a control signal in brain-computer interfaces. We also discuss several ideas for further improving the setup and propose future research based on solid hypotheses about behavioral consequences of modulating neuronal oscillations by brain computer interfacing.
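
    The template-matching step can be sketched as correlating a trial's posterior alpha-power topography with per-class templates estimated from the Posner-cueing run. The sensor count and the use of Pearson correlation below are illustrative assumptions.

    ```python
    import numpy as np

    # Template matching: the class whose training-derived alpha-power
    # topography correlates best with the current trial wins.

    def classify(alpha_power, template_left, template_right):
        r_left = np.corrcoef(alpha_power, template_left)[0, 1]
        r_right = np.corrcoef(alpha_power, template_right)[0, 1]
        return "left" if r_left > r_right else "right"

    rng = np.random.default_rng(0)
    tl = rng.standard_normal(30)                 # "attend left" template
    tr = rng.standard_normal(30)                 # "attend right" template
    trial = tl + 0.3 * rng.standard_normal(30)   # a left-like trial
    print(classify(trial, tl, tr))               # -> "left"
    ```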

  17. Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design

    NASA Astrophysics Data System (ADS)

    Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy

    We present an analysis of communicability methodology in graphics and animation components for interface design, called CAN (Communicability, Acceptability and Novelty). This methodology was developed between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts, in studies where there is a bi-directional interrelation between ergonomics, usability, user-centered design, software quality and human-computer interaction. We also present heuristic results on iconography and layout design in blogs and websites from the following countries: Spain, Italy, Portugal and France.

  18. Goal selection versus process control while learning to use a brain-computer interface

    NASA Astrophysics Data System (ADS)

    Royer, Audrey S.; Rose, Minn L.; He, Bin

    2011-06-01

    A brain-computer interface (BCI) can be used to accomplish a task without requiring motor output. Two major control strategies used by BCIs during task completion are process control and goal selection. In process control, the user exerts continuous control and independently executes the given task. In goal selection, the user communicates their goal to the BCI and then receives assistance executing the task. A previous study has shown that goal selection is more accurate and faster in use. An unanswered question is, which control strategy is easier to learn? This study directly compares goal selection and process control while learning to use a sensorimotor rhythm-based BCI. Twenty young healthy human subjects were randomly assigned either to a goal selection or a process control-based paradigm for eight sessions. At the end of the study, the best user from each paradigm completed two additional sessions using all paradigms randomly mixed. The results of this study were that goal selection required a shorter training period for increased speed, accuracy, and information transfer over process control. These results held for the best subjects as well as in the general subject population. The demonstrated characteristics of goal selection make it a promising option to increase the utility of BCIs intended for both disabled and able-bodied users.

  19. Head-Disk Interface Technology: Challenges and Approaches

    NASA Astrophysics Data System (ADS)

    Liu, Bo

    Magnetic hard disk drive (HDD) technology is believed to be one of the most successful examples of modern mechatronics systems. The mechanical beauty of the magnetic HDD includes simple but super-high-accuracy head-positioning technology, high-speed and high-stability spindle motor technology, and head-disk interface technology, which keeps the millimeter-sized slider flying over the disk surface at nanometer-level slider-disk spacing. This paper addresses the challenges and possible approaches for further reducing the slider-disk spacing whilst retaining the stability and robustness of head-disk systems for future advanced magnetic disk drives.

  20. PointCom: semi-autonomous UGV control with intuitive interface

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham

    2008-04-01

    Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.

  1. Telepresence: A "Real" Component in a Model to Make Human-Computer Interface Factors Meaningful in the Virtual Learning Environment

    ERIC Educational Resources Information Center

    Selverian, Melissa E. Markaridian; Lombard, Matthew

    2009-01-01

    A thorough review of the research relating to Human-Computer Interface (HCI) form and content factors in the education, communication and computer science disciplines reveals strong associations of meaningful perceptual "illusions" with enhanced learning and satisfaction in the evolving classroom. Specifically, associations emerge…

  2. Young Children's Skill in Using a Mouse to Control a Graphical Computer Interface.

    ERIC Educational Resources Information Center

    Crook, Charles

    1992-01-01

    Describes a study that investigated the performance of preschoolers and children in the first three years of formal education on tasks that involved skills using a mouse-based control of a graphical computer interface. The children's performance is compared with that of novice adult users and expert users. (five references) (LRW)

  3. Computer-Based Tools for Evaluating Graphical User Interfaces

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1997-01-01

    The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; it should be part of an iterative design cycle, with the output of evaluation being fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research, an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.

  4. Design of an efficient framework for fast prototyping of customized human-computer interfaces and virtual environments for rehabilitation.

    PubMed

    Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe

    2013-06-01

    Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services regarding rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Evolution of brain-computer interfaces: going beyond classic motor physiology

    PubMed Central

    Leuthardt, Eric C.; Schalk, Gerwin; Roland, Jarod; Rouse, Adam; Moran, Daniel W.

    2010-01-01

    The notion that a computer can decode brain signals to infer the intentions of a human and then enact those intentions directly through a machine is becoming a realistic technical possibility. These types of devices are known as brain-computer interfaces (BCIs). The evolution of these neuroprosthetic technologies could have significant implications for patients with motor disabilities by enhancing their ability to interact and communicate with their environment. The cortical physiology most investigated and used for device control has been brain signals from the primary motor cortex. To date, this classic motor physiology has been an effective substrate for demonstrating the potential efficacy of BCI-based control. However, emerging research now stands to further enhance our understanding of the cortical physiology underpinning human intent and provide further signals for more complex brain-derived control. In this review, the authors report the current status of BCIs and detail the emerging research trends that stand to augment clinical applications in the future. PMID:19569892

  6. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.

    2008-12-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. Disclosure. JPD is the Chief Scientific Officer and a director of Cyberkinetics Neurotechnology Systems (CYKN); he holds stock and receives compensation. JDS has been a consultant for CYKN. LRH receives clinical trial support from CYKN.
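
    A minimal sketch of the velocity-decoding Kalman filter follows: the state is the 2D cursor velocity, the observation is a vector of unit firing rates, and the model matrices A, W, H, Q, which in the study would be fit from training data, are placeholders here.

    ```python
    import numpy as np

    # Kalman-filter velocity decoder: x_t = A x_{t-1} + w (state: cursor
    # velocity), z_t = H x_t + q (observation: firing rates). Each step does
    # the standard predict/update and returns the decoded velocity.

    class VelocityKalman:
        def __init__(self, A, W, H, Q):
            self.A, self.W, self.H, self.Q = A, W, H, Q
            self.x = np.zeros(A.shape[0])      # velocity estimate
            self.P = np.eye(A.shape[0])        # estimate covariance

        def step(self, z):
            x_pred = self.A @ self.x                      # predict
            P_pred = self.A @ self.P @ self.A.T + self.W
            S = self.H @ P_pred @ self.H.T + self.Q       # update
            K = P_pred @ self.H.T @ np.linalg.inv(S)
            self.x = x_pred + K @ (z - self.H @ x_pred)
            self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
            return self.x   # decoded velocity; integrate to move the cursor

    n_units = 20
    decoder = VelocityKalman(A=0.95 * np.eye(2), W=0.02 * np.eye(2),
                             H=np.random.randn(n_units, 2), Q=np.eye(n_units))
    print(decoder.step(np.random.randn(n_units)))
    ```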

  7. A human factors approach to range scheduling for satellite control

    NASA Technical Reports Server (NTRS)

    Wright, Cameron H. G.; Aitken, Donald J.

    1991-01-01

    Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.

  8. A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces

    PubMed Central

    Heo, Jeong; Yoon, Heenam; Park, Kwang Suk

    2017-01-01

    Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain–computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through a figure-eight course without colliding with obstacles. PMID:28644398

  9. A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces.

    PubMed

    Heo, Jeong; Yoon, Heenam; Park, Kwang Suk

    2017-06-23

    Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain-computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through a figure-eight course without colliding with obstacles.

  10. Intermittent control: a computational theory of human control.

    PubMed

    Gawthrop, Peter; Loram, Ian; Lakie, Martin; Gollee, Henrik

    2011-02-01

    The paradigm of continuous control using internal models has advanced understanding of human motor control. However, this paradigm ignores some aspects of human control, including intermittent feedback, serial ballistic control, triggered responses and refractory periods. It is shown that event-driven intermittent control provides a framework to explain the behaviour of the human operator under a wider range of conditions than continuous control. Continuous control is included as a special case, but sampling, system matched hold, an intermittent predictor and an event trigger allow serial open-loop trajectories using intermittent feedback. The implementation here may be described as "continuous observation, intermittent action". Beyond explaining unimodal regulation distributions in common with continuous control, these features naturally explain refractoriness and bimodal stabilisation distributions observed in double stimulus tracking experiments and quiet standing, respectively. Moreover, given that human control systems contain significant time delays, a biological-cybernetic rationale favours intermittent over continuous control: intermittent predictive control is computationally less demanding than continuous predictive control. A standard continuous-time predictive control model of the human operator is used as the underlying design method for an event-driven intermittent controller. It is shown that when event thresholds are small and sampling is regular, the intermittent controller can masquerade as the underlying continuous-time controller and thus, under these conditions, the continuous-time and intermittent controller cannot be distinguished. This explains why the intermittent control hypothesis is consistent with the continuous control hypothesis for certain experimental conditions.
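
    The "continuous observation, intermittent action" scheme can be sketched as follows. A zero-order hold stands in for the system-matched hold, and the threshold, refractory period and gain are illustrative assumptions.

    ```python
    # Event-driven intermittent control: the error is observed every sample,
    # but a new ballistic command is issued only when it exceeds a threshold
    # and the refractory period has elapsed; the command is held in between.

    def intermittent_control(errors, threshold=0.5, refractory=10, gain=-0.8):
        command, cooldown, out = 0.0, 0, []
        for e in errors:
            if cooldown == 0 and abs(e) > threshold:  # event trigger
                command = gain * e                    # new open-loop command
                cooldown = refractory                 # start refractory period
            cooldown = max(0, cooldown - 1)
            out.append(command)                       # hold between events
        return out

    print(intermittent_control([0.1, 0.9, 0.8, 0.2, 1.2, 0.1]))
    # -> 0.0, then a held command of about -0.72; the later 1.2 error falls
    # inside the refractory period, so no new correction fires.
    ```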

  11. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    PubMed

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectric-controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize the sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures, and the recordings were then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion and thumb flexion. This work has shown that reliable myoelectric-based human computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
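
    Per-gesture sensitivity and specificity follow directly from a confusion matrix. In the sketch below, a simple combined score (sensitivity plus specificity) stands in for the paper's PNM index when ranking the gestures; the matrix values are invented.

    ```python
    import numpy as np

    # Rank gestures by per-class sensitivity and specificity computed from a
    # confusion matrix (rows = true gesture, columns = predicted gesture).

    def rank_gestures(confusion):
        C = np.asarray(confusion, dtype=float)
        total = C.sum()
        scores = []
        for g in range(C.shape[0]):
            tp = C[g, g]
            fn = C[g].sum() - tp          # missed instances of gesture g
            fp = C[:, g].sum() - tp       # other gestures mistaken for g
            tn = total - tp - fn - fp
            sens, spec = tp / (tp + fn), tn / (tn + fp)
            scores.append((sens + spec, g, sens, spec))
        return sorted(scores, reverse=True)   # best-performing gestures first

    C = [[18, 2, 0],
         [3, 15, 2],
         [0, 1, 19]]
    for score, g, sens, spec in rank_gestures(C):
        print(f"gesture {g}: sensitivity={sens:.2f}, specificity={spec:.2f}")
    ```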

  12. Brain Computer Interfaces, a Review

    PubMed Central

    Nicolas-Alonso, Luis Fernando; Gomez-Gil, Jaime

    2012-01-01

    A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or ‘locked in’ by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices. PMID:22438708

  13. Brain computer interfaces, a review.

    PubMed

    Nicolas-Alonso, Luis Fernando; Gomez-Gil, Jaime

    2012-01-01

    A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.
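
    The five steps the review enumerates can be pictured as a chain of functions. Every stage below is a deliberately trivial placeholder for the families of methods the review surveys.

    ```python
    import numpy as np

    def acquire():                # 1. signal acquisition (one EEG epoch)
        return np.random.randn(8, 250)       # 8 channels x 1 s at 250 Hz

    def preprocess(x):            # 2. enhancement (common average reference)
        return x - x.mean(axis=0, keepdims=True)

    def features(x):              # 3. feature extraction (log-variance as a
        return np.log(np.var(x, axis=1))     #    crude band-power proxy)

    def classify(f):              # 4. classification (toy threshold rule)
        return "left" if f[:4].mean() > f[4:].mean() else "right"

    def control_interface(cmd):   # 5. control interface to the device
        print("device command:", cmd)

    control_interface(classify(features(preprocess(acquire()))))
    ```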

  14. Eye-movements and Voice as Interface Modalities to Computer Systems

    NASA Astrophysics Data System (ADS)

    Farid, Mohsen M.; Murtagh, Fionn D.

    2003-03-01

    We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.

  15. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  16. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-01-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  17. Sharing control between humans and automation using haptic interface: primary and secondary task performance benefits.

    PubMed

    Griffiths, Paul G; Gillespie, R Brent

    2005-01-01

    This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces.

  18. An Analytical Calculation of Frictional and Bending Moments at the Head-Neck Interface of Hip Joint Implants during Different Physiological Activities.

    PubMed

    Farhoudi, Hamidreza; Oskouei, Reza H; Pasha Zanoosi, Ali A; Jones, Claire F; Taylor, Mark

    2016-12-05

    This study predicts the frictional moments at the head-cup interface and frictional torques and bending moments acting on the head-neck interface of a modular total hip replacement across a range of activities of daily living. The predicted moment and torque profiles are based on the kinematics of four patients and the implant characteristics of a metal-on-metal implant. Depending on the body weight and type of activity, the moments and torques had significant variations in both magnitude and direction over the activity cycles. For the nine investigated activities, the maximum magnitude of the frictional moment ranged from 2.6 to 7.1 Nm. The maximum magnitude of the torque acting on the head-neck interface ranged from 2.3 to 5.7 Nm. The bending moment acting on the head-neck interface varied from 7 to 21.6 Nm. One-leg-standing had the widest range of frictional torque on the head-neck interface (11 Nm) while normal walking had the smallest range (6.1 Nm). The widest range, together with the maximum magnitude of torque, bending moment, and frictional moment, occurred during one-leg-standing of the lightest patient. Most of the simulated activities resulted in frictional torques that were near the previously reported oxide layer depassivation threshold torque. The predicted bending moments were also found at a level believed to contribute to the oxide layer depassivation. The calculated magnitudes and directions of the moments, applied directly to the head-neck taper junction, provide realistic mechanical loading data for in vitro and computational studies on the mechanical behaviour and multi-axial fretting at the head-neck interface.

  19. An Analytical Calculation of Frictional and Bending Moments at the Head-Neck Interface of Hip Joint Implants during Different Physiological Activities

    PubMed Central

    Farhoudi, Hamidreza; Oskouei, Reza H.; Pasha Zanoosi, Ali A.; Jones, Claire F.; Taylor, Mark

    2016-01-01

    This study predicts the frictional moments at the head-cup interface and frictional torques and bending moments acting on the head-neck interface of a modular total hip replacement across a range of activities of daily living. The predicted moment and torque profiles are based on the kinematics of four patients and the implant characteristics of a metal-on-metal implant. Depending on the body weight and type of activity, the moments and torques had significant variations in both magnitude and direction over the activity cycles. For the nine investigated activities, the maximum magnitude of the frictional moment ranged from 2.6 to 7.1 Nm. The maximum magnitude of the torque acting on the head-neck interface ranged from 2.3 to 5.7 Nm. The bending moment acting on the head-neck interface varied from 7 to 21.6 Nm. One-leg-standing had the widest range of frictional torque on the head-neck interface (11 Nm) while normal walking had the smallest range (6.1 Nm). The widest range, together with the maximum magnitude of torque, bending moment, and frictional moment, occurred during one-leg-standing of the lightest patient. Most of the simulated activities resulted in frictional torques that were near the previously reported oxide layer depassivation threshold torque. The predicted bending moments were also found at a level believed to contribute to the oxide layer depassivation. The calculated magnitudes and directions of the moments, applied directly to the head-neck taper junction, provide realistic mechanical loading data for in vitro and computational studies on the mechanical behaviour and multi-axial fretting at the head-neck interface. PMID:28774104
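
    For a sense of where moments of this size come from, the frictional moment of a hard-on-hard bearing is commonly approximated as the product of a friction factor, the joint contact force, and the head radius. The values below are illustrative assumptions, not the paper's patient-specific kinematics; they simply land inside the 2.6 to 7.1 Nm range reported above.

        # Illustrative magnitudes only (all values are assumed, not the paper's data).
        friction_factor = 0.12     # assumed friction factor for a metal-on-metal bearing
        contact_force_N = 2000.0   # assumed hip contact force, roughly 2.5 x body weight
        head_radius_m = 0.018      # 36 mm diameter femoral head

        frictional_moment = friction_factor * contact_force_N * head_radius_m
        print(f"M_f ~ {frictional_moment:.2f} Nm")  # ~4.32 Nm, within the reported 2.6-7.1 Nm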

  20. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task

    NASA Astrophysics Data System (ADS)

    Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.

    2014-12-01

    Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  1. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.

    PubMed

    Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A

    2014-12-01

    To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  2. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions

    PubMed Central

    Box, Simon

    2014-01-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570

  3. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    PubMed

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
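
    In spirit, the HuTMaC approach is ordinary supervised learning on logged (junction state, human action) pairs. A minimal sketch follows; the feature layout, file names, and network size are hypothetical, and the paper's own classifiers may differ.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Hypothetical logs: one row per decision point, columns are queue lengths
        # on each approach; the label is the stage the human player switched to.
        X = np.load("junction_states.npy")   # assumed file, shape (n_samples, n_features)
        y = np.load("player_stages.npy")     # assumed file, shape (n_samples,)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X, y)

        def hutmac_stage(state):
            """Return the signal stage the trained human-imitating controller picks."""
            return clf.predict(np.asarray(state).reshape(1, -1))[0]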

  4. DMA shared byte counters in a parallel computer

    DOEpatents

    Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos

    2010-04-06

    A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status registers and control registers. The injection FIFO metadata maintains the memory locations of the injection FIFOs, including each FIFO's current head and tail, and the reception FIFO metadata maintains the memory locations of the reception FIFOs, including each FIFO's current head and tail. The injection byte counters and reception byte counters may be shared between messages.
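
    The structure claimed here can be pictured with a small software model: per-FIFO metadata carrying head and tail pointers, plus a byte counter that several messages may share and that signals completion when it drains to zero. The sketch below is an illustrative analogue in Python, not the patented hardware.

        from dataclasses import dataclass

        @dataclass
        class FifoMetadata:
            """Descriptor for one injection or reception FIFO."""
            base: int          # start address of the FIFO in memory
            size: int          # number of entries
            head: int = 0      # next entry the DMA engine consumes
            tail: int = 0      # next free slot the processor fills

        @dataclass
        class SharedByteCounter:
            """A byte counter that several messages may share."""
            remaining: int = 0

            def post(self, nbytes: int) -> None:
                self.remaining += nbytes       # a message is injected

            def on_bytes_moved(self, nbytes: int) -> bool:
                self.remaining -= nbytes       # DMA engine reports progress
                return self.remaining == 0     # True when all shared messages finish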

  5. Technology Roadmap Instrumentation, Control, and Human-Machine Interface to Support DOE Advanced Nuclear Energy Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donald D Dudenhoeffer; Bruce P Hallbert

    Instrumentation, Controls, and Human-Machine Interface (ICHMI) technologies are essential to ensuring delivery and effective operation of optimized advanced Generation IV (Gen IV) nuclear energy systems. In 1996, the Watts Bar I nuclear power plant in Tennessee was the last U.S. nuclear power plant to go on line. It was, in fact, built based on pre-1990 technology. Since this last U.S. nuclear power plant was designed, there have been major advances in the field of ICHMI systems. Computer technology employed in other industries has advanced dramatically, and computing systems are now replaced every few years as they become functionally obsolete. Functional obsolescence occurs when newer, more functional technology replaces or supersedes an existing technology, even though an existing technology may well be in working order. Although ICHMI architectures are comprised of much of the same technology, they have not been updated nearly as often in the nuclear power industry. For example, some newer Personal Digital Assistants (PDAs) or handheld computers may, in fact, have more functionality than the 1996 computer control system at the Watts Bar I plant. This illustrates the need to transition and upgrade current nuclear power plant ICHMI technologies.

  6. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  7. A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    PubMed Central

    Corti, Kevin; Gillespie, Alex

    2015-01-01

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence. PMID:26042066

  8. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    NASA Astrophysics Data System (ADS)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
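
    The three classification principles enumerated above (per-channel energy dominance, spectral differences between muscles, correlated energy on adjacent channels) suggest a cascaded decision rule. The sketch below illustrates one such cascade; every threshold and the spectral-centroid cutoff are assumptions for illustration, not the paper's tuned values.

        import numpy as np

        def classify_emg(frame, fs=1000.0, rest_thresh=1e-3):
            """One decision on a 3-channel EMG frame, shape (3, n_samples).
            Channels: 0 = right frontalis, 1 = left temporalis, 2 = right temporalis.
            All thresholds here are illustrative assumptions."""
            energy = np.mean(frame ** 2, axis=1)
            if energy.max() < rest_thresh:
                return "rest"
            # Principle 1: the contracting muscle's channel carries the most energy.
            if energy[1] > 2.0 * energy[2]:
                return "left"                      # left jaw clench -> cursor left
            if energy[2] > 2.0 * energy[1]:
                return "right"                     # right jaw clench -> cursor right
            # Principle 3: both temporalis channels active with correlated energy -> click.
            if np.corrcoef(frame[1] ** 2, frame[2] ** 2)[0, 1] > 0.8:
                return "click"                     # simultaneous jaw clench
            # Principle 2: frontalis spectral shape separates brow up from brow down.
            spectrum = np.abs(np.fft.rfft(frame[0]))
            freqs = np.fft.rfftfreq(frame.shape[1], d=1.0 / fs)
            centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
            return "up" if centroid < 100.0 else "down"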

  9. Head pose estimation in computer vision: a survey.

    PubMed

    Murphy-Chutorian, Erik; Trivedi, Mohan Manubhai

    2009-04-01

    The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.

  10. Control of a brain-computer interface using stereotactic depth electrodes in and adjacent to the hippocampus

    NASA Astrophysics Data System (ADS)

    Krusienski, D. J.; Shih, J. J.

    2011-04-01

    A brain-computer interface (BCI) is a device that enables severely disabled people to communicate and interact with their environments using their brain waves. Most research investigating BCI in humans has used scalp-recorded electroencephalography or intracranial electrocorticography. The use of brain signals obtained directly from stereotactic depth electrodes to control a BCI has not previously been explored. In this study, event-related potentials (ERPs) recorded from bilateral stereotactic depth electrodes implanted in and adjacent to the hippocampus were used to control a P300 Speller paradigm. The ERPs were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in the two subjects tested. Our results demonstrate that ERPs from hippocampal and hippocampal adjacent depth electrodes can be used to reliably control the P300 Speller BCI paradigm.
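
    The P300 Speller logic described here is straightforward to sketch: score each row/column flash with a linear classifier trained on target versus non-target ERPs, accumulate evidence over stimulation sequences, and intersect the best row and column. In the sketch below, logistic regression stands in for the paper's unspecified linear classifier, and the 6 x 6 grid is the conventional speller layout.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def train_p300(epochs, is_target):
            """epochs: (n_flashes, n_channels, n_samples) ERP windows, one per
            row/column flash; is_target marks flashes containing the cued letter."""
            X = epochs.reshape(len(epochs), -1)
            return LogisticRegression(max_iter=1000).fit(X, is_target)

        def predict_letter(clf, epochs, flash_ids, n_rows=6, n_cols=6):
            """Sum classifier evidence per row/column over all stimulation
            sequences; the best row and best column intersect at the letter."""
            scores = clf.decision_function(epochs.reshape(len(epochs), -1))
            totals = np.zeros(n_rows + n_cols)
            for s, fid in zip(scores, flash_ids):   # fid: 0..5 rows, 6..11 columns
                totals[fid] += s
            return int(totals[:n_rows].argmax()), int(totals[n_rows:].argmax())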

  11. Virtually-augmented interfaces for tactical aircraft.

    PubMed

    Haas, M W

    1995-05-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.

  12. User interface issues in supporting human-computer integrated scheduling

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    Explored here are the user interface problems encountered with the Operations Missions Planner (OMP) project at the Jet Propulsion Laboratory (JPL). OMP uses a unique iterative approach to planning that places additional requirements on the user interface, particularly to support system development and maintenance. These requirements are necessary to support the concepts of heuristically controlled search, in-progress assessment, and iterative refinement of the schedule. The techniques used to address the OMP interface needs are given.

  13. A distributed, graphical user interface based, computer control system for atomic physics experiments

    NASA Astrophysics Data System (ADS)

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.

  14. A distributed, graphical user interface based, computer control system for atomic physics experiments.

    PubMed

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
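
    The variable-frequency-clock idea is what lets "redundant buffers be dramatically shortened": a long static hold costs one buffer word at a slow clock tick rather than millions of samples at the fast clock. A toy illustration follows, with all names hypothetical.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class Segment:
            duration_s: float   # how long to hold the value
            value: float        # output level during the hold

        def compile_sequence(segments: List[Segment]) -> List[Tuple[float, float]]:
            """One buffer word per segment: the FPGA clock is retimed so a hold of
            any length costs a single entry instead of duration x clock samples."""
            return [(seg.duration_s, seg.value) for seg in segments]

        # A 10 s hold followed by a 100 ns pulse takes two entries, not ~10^8
        # samples at a fixed 10 MHz output clock.
        buffer = compile_sequence([Segment(10.0, 0.0), Segment(100e-9, 5.0)])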

  15. Human/Computer Interfacing in Educational Environments.

    ERIC Educational Resources Information Center

    Sarti, Luigi

    1992-01-01

    This discussion of educational applications of user interfaces covers the benefits of adopting database techniques in organizing multimedia materials; the evolution of user interface technology, including teletype interfaces, analogic overlay graphics, window interfaces, and adaptive systems; application design problems, including the…

  16. Big data challenges in decoding cortical activity in a human with quadriplegia to inform a brain computer interface.

    PubMed

    Friedenberg, David A; Bouton, Chad E; Annetta, Nicholas V; Skomrock, Nicholas; Mingming Zhang; Schwemmer, Michael; Bockbrader, Marcia A; Mysiw, W Jerry; Rezai, Ali R; Bresler, Herbert S; Sharma, Gaurav

    2016-08-01

    Recent advances in Brain Computer Interfaces (BCIs) have created hope that one day paralyzed patients will be able to regain control of their paralyzed limbs. As part of an ongoing clinical study, we have implanted a 96-electrode Utah array in the motor cortex of a paralyzed human. The array generates almost 3 million data points from the brain every second. This presents several big data challenges towards developing algorithms that should not only process the data in real-time (for the BCI to be responsive) but are also robust to temporal variations and non-stationarities in the sensor data. We demonstrate an algorithmic approach to analyze such data and present a novel method to evaluate such algorithms. We present our methodology with examples of decoding human brain data in real-time to inform a BCI.
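
    The quoted data rate is easy to reconstruct: a 96-electrode array sampled at roughly 30 kHz per channel (a typical Utah-array acquisition rate, assumed here) produces close to 3 million samples per second.

        channels = 96
        sample_rate_hz = 30_000   # assumed per-channel rate for a Utah array
        print(channels * sample_rate_hz)  # 2,880,000 samples/s: "almost 3 million"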

  17. MARTI: man-machine animation real-time interface

    NASA Astrophysics Data System (ADS)

    Jones, Christian M.; Dlay, Satnam S.

    1997-05-01

    The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest, hybrid connectionist/hidden Markov model, speech recognition system to provide very accurate phone recognition and timing for speaker independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.

  18. Optimizing the Usability of Brain-Computer Interfaces.

    PubMed

    Zhang, Yin; Chase, Steve M

    2018-05-01

    Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.

  19. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.

    PubMed

    Yoo, Bum-Soo; Kim, Jong-Hwan

    2015-09-01

    During the last few decades, as part of an effort to enhance natural human robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real-time and the real environment, for a robotic head. We formulate the gaze control as a multicriteria decision making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to a power set of the criteria to reflect the user defined preference. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with the global evaluation values of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
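
    The aggregation step described here is a discrete Choquet-style fuzzy integral: partial evaluations are sorted, and successive increments are weighted by the fuzzy measure of the coalition of criteria still active. A compact sketch follows; the paper's seven criteria and its measure values are not reproduced, so the toy example below uses arbitrary illustrative numbers.

        import numpy as np

        def choquet_integral(partial_evals, fuzzy_measure):
            """Discrete Choquet integral of per-criterion scores for one candidate
            gaze direction. fuzzy_measure maps frozensets of criterion indices to
            weights in [0, 1], with mu(empty) = 0 and mu(all criteria) = 1."""
            order = np.argsort(partial_evals)          # criteria sorted by score, ascending
            scores = np.asarray(partial_evals)[order]
            total, prev = 0.0, 0.0
            for i in range(len(order)):
                coalition = frozenset(int(c) for c in order[i:])   # criteria still active
                total += (scores[i] - prev) * fuzzy_measure[coalition]
                prev = scores[i]
            return total

        # Toy 2-criterion example (measure values are arbitrary illustrations):
        mu = {frozenset(): 0.0, frozenset({0}): 0.4, frozenset({1}): 0.7,
              frozenset({0, 1}): 1.0}
        print(choquet_integral([0.9, 0.3], mu))   # 0.3 * 1.0 + 0.6 * 0.4 = 0.54

    In the full algorithm, this integral would be evaluated for every candidate gaze direction, with inhibition of return applied before selecting the direction with the highest global evaluation.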

  20. Ocular attention-sensing interface system

    NASA Technical Reports Server (NTRS)

    Zaklad, Allen; Glenn, Floyd A., III; Iavecchia, Helene P.; Stokes, James M.

    1986-01-01

    The purpose of the research was to develop an innovative human-computer interface based on eye movement and voice control. By eliminating a manual interface (keyboard, joystick, etc.), OASIS provides a control mechanism that is natural, efficient, accurate, and low in workload.

  1. [Research of controlling of smart home system based on P300 brain-computer interface].

    PubMed

    Wang, Jinjia; Yang, Chengjie

    2014-08-01

    Using electroencephalogram (EEG) signals to control external devices has always been a research focus in the field of brain-computer interfaces (BCI). This is especially significant for those with disabilities who have lost the capacity for movement. In this paper, the P300-based BCI and microcontroller-based wireless radio frequency (RF) technology are utilized to design a smart home control system, which can be used to control household appliances, lighting systems, and security devices directly. Experimental results showed that the system was simple, reliable and easy to popularize.

  2. Simulation of the human-telerobot interface

    NASA Technical Reports Server (NTRS)

    Stuart, Mark A.; Smith, Randy L.

    1988-01-01

    A part of NASA's Space Station will be a Flight Telerobotic Servicer (FTS) used to help assemble, service, and maintain the Space Station. Since the human operator will be required to control the FTS, the design of the human-telerobot interface must be optimized from a human factors perspective. Simulation has been used as an aid in the development of complex systems, and it is especially valuable here because the FTS is a complex system for which few direct comparisons to existent systems can be made. Simulation should ensure that the hardware and software components of the human-telerobot interface are designed and selected so that the operator's capabilities and limitations are accommodated. Three broad areas of the human-telerobot interface where simulation can be of assistance are described. The use of simulation not only can result in a well-designed human-telerobot interface, but also can be used to ensure that components have been selected to best meet the system's goals, and for operator training.

  3. On the Use of Electrooculogram for Efficient Human Computer Interfaces

    PubMed Central

    Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.

    2010-01-01

    The aim of this study is to present electrooculogram (EOG) signals that can be used for efficient human computer interfaces. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We have made several experiments to compare the P300-based BCI speller and the EOG-based new system. A five-letter word can be written in 25 seconds on average with the new EOG-based system, versus 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687

  4. The Next Wave: Humans, Computers, and Redefining Reality

    NASA Technical Reports Server (NTRS)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.

  5. Human-computer interface for the study of information fusion concepts in situation analysis and command decision support systems

    NASA Astrophysics Data System (ADS)

    Roy, Jean; Breton, Richard; Paradis, Stephane

    2001-08-01

    Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation Analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of the SA concepts and the design of data fusion techniques must take human factor aspects into account in order to ensure a cognitive fit of the fusion system with the decision-maker. Indeed, the tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.

  6. An EOG-Based Human-Machine Interface for Wheelchair Control.

    PubMed

    Huang, Qiyun; He, Shenghong; Wang, Qihong; Gu, Zhenghui; Peng, Nengneng; Li, Kai; Zhang, Yuandong; Shao, Ming; Li, Yuanqing

    2017-07-27

    Non-manual human-machine interfaces (HMIs) have been studied for wheelchair control with the aim of helping severely paralyzed individuals regain some mobility. The challenge is to rapidly, accurately and sufficiently produce control commands, such as left and right turns, forward and backward motions, acceleration, deceleration, and stopping. In this paper, a novel electrooculogram (EOG)-based HMI is proposed for wheelchair control. Thirteen flashing buttons are presented in the graphical user interface (GUI), and each of the buttons corresponds to a command. These buttons flash one by one in a pre-defined sequence. The user can select a button by blinking in sync with its flashes. The algorithm detects the eye blinks from a channel of vertical EOG data and determines the user's target button based on the synchronization between the detected blinks and the button's flashes. For healthy subjects/patients with spinal cord injuries (SCIs), the proposed HMI achieved an average accuracy of 96.7%/91.7% and a response time of 3.53 s/3.67 s with 0 false positive rates (FPRs). Using only one channel of vertical EOG signals associated with eye blinks, the proposed HMI can accurately provide sufficient commands with a satisfactory response time. The proposed HMI provides a novel non-manual approach for severely paralyzed individuals to control a wheelchair. Compared with a newly established EOG-based HMI, the proposed HMI can generate more commands with higher accuracy, lower FPR and fewer electrodes.
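
    The selection logic amounts to blink detection on the vertical EOG channel plus a synchronization test against each button's flash schedule. The sketch below illustrates one way to realize it; the amplitude threshold, refractory period, and tolerance window are assumptions rather than the paper's calibrated values.

        import numpy as np

        def detect_blinks(veog, fs, thresh=150e-6, refractory_s=0.25):
            """Blink onsets (sample indices) from a vertical EOG trace via a simple
            amplitude threshold; threshold and refractory period are assumptions."""
            above = np.flatnonzero(veog > thresh)
            blinks, last = [], -int(refractory_s * fs)
            for idx in above:
                if idx - last >= int(refractory_s * fs):
                    blinks.append(idx)
                    last = idx
            return np.asarray(blinks)

        def select_button(blink_times_s, flash_schedule, tolerance_s=0.3):
            """flash_schedule: {button_id: array of flash onset times in seconds}.
            The chosen button is the one whose flashes best coincide with blinks."""
            def sync_score(flashes):
                return sum(np.min(np.abs(flashes - t)) < tolerance_s
                           for t in blink_times_s)
            return max(flash_schedule, key=lambda b: sync_score(flash_schedule[b]))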

  7. Interfacing laboratory instruments to multiuser, virtual memory computers

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Stang, David B.; Roth, Don J.

    1989-01-01

    Incentives, problems and solutions associated with interfacing laboratory equipment with multiuser, virtual memory computers are presented. The major difficulty concerns how to utilize these computers effectively in a medium sized research group. This entails optimization of hardware interconnections and software to facilitate multiple instrument control, data acquisition and processing. The architecture of the system that was devised, and associated programming and subroutines are described. An example program involving computer controlled hardware for ultrasonic scan imaging is provided to illustrate the operational features.

  8. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    NASA Astrophysics Data System (ADS)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semiautonomous nature for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It used advanced path planning algorithms to ensure that obstacles were avoided and that the operators were free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. It displayed sensor information from each individual robot directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helped the users pinpoint source information or helped the operator with the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions were tested for source detection tasks. Results show that the novel augmented reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to the traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.

  9. The Self-Paced Graz Brain-Computer Interface: Methods and Applications

    PubMed Central

    Scherer, Reinhold; Schloegl, Alois; Lee, Felix; Bischof, Horst; Janša, Janez; Pfurtscheller, Gert

    2007-01-01

    We present the self-paced 3-class Graz brain-computer interface (BCI) which is based on the detection of sensorimotor electroencephalogram (EEG) rhythms induced by motor imagery. Self-paced operation means that the BCI is able to determine whether the ongoing brain activity is intended as control signal (intentional control) or not (non-control state). The presented system is able to automatically reduce electrooculogram (EOG) artifacts, to detect electromyographic (EMG) activity, and uses only three bipolar EEG channels. Two applications are presented: the freeSpace virtual environment (VE) and the Brainloop interface. The freeSpace is a computer-game-like application where subjects have to navigate through the environment and collect coins by autonomously selecting navigation commands. Three subjects participated in these feedback experiments and each learned to navigate through the VE and collect coins. Two out of the three succeeded in collecting all three coins. The Brainloop interface provides an interface between the Graz-BCI and Google Earth. PMID:18350133

  10. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    NASA Astrophysics Data System (ADS)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered an important non-verbal form of communication and could also be used in the area of human-computer interfaces. A major improvement of the proposed approach is that it allows estimation of head poses at high yaw/pitch angles when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  11. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces.

    PubMed

    Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that can potentially describe different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics whose sources lie outside the primary motor areas but affect the signal of interest originating from them, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.

  12. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces

    PubMed Central

    Grissmann, Sebastian; Zander, Thorsten O.; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that can potentially describe different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics whose sources lie outside the primary motor areas but affect the signal of interest originating from them, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios. PMID:28769776

  13. A Multi-purpose Brain-Computer Interface Output Device

    PubMed Central

    Thompson, David E; Huggins, Jane E

    2012-01-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as standalone communication and control systems, rather than as interfaces to existing systems built for these purposes. While an individual communication and control system may be powerful or flexible, no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e. without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems. PMID:22208120

  14. A multi-purpose brain-computer interface output device.

    PubMed

    Thompson, David E; Huggins, Jane E

    2011-10-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as stand-alone communication and control systems, rather than as interfaces to existing systems built for these purposes. An individual communication and control system may be powerful or flexible, but no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e., without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems.

  15. Software and Human-Machine Interface Development for Environmental Controls Subsystem Support

    NASA Technical Reports Server (NTRS)

    Dobson, Matthew

    2018-01-01

    The Space Launch System (SLS) is the next premier launch vehicle for NASA. It is the next stage of manned space exploration from American soil, and will be the platform in which we push further beyond Earth orbit. In preparation of the SLS maiden voyage on Exploration Mission 1 (EM-1), the existing ground support architecture at Kennedy Space Center required significant overhaul and updating. A comprehensive upgrade of controls systems was necessary, including programmable logic controller software, as well as Launch Control Center (LCC) firing room and local launch pad displays for technician use. Environmental control acts as an integral component in these systems, being the foremost system for conditioning the pad and extremely sensitive launch vehicle until T-0. The Environmental Controls Subsystem (ECS) required testing and modification to meet the requirements of the designed system, as well as the human factors requirements of NASA software for Validation and Verification (V&V). This term saw significant strides in the progress and functionality of the human-machine interfaces used at the launch pad, and improved integration with the controller code.

  16. Guidance for human interface with artificial intelligence systems

    NASA Technical Reports Server (NTRS)

    Potter, Scott S.; Woods, David D.

    1991-01-01

    The beginning of a research effort to collect and integrate existing research findings about how to combine computer power and people is discussed, including problems and pitfalls as well as desirable features. The goal of the research is to develop guidance for the design of human interfaces with intelligent systems. Fault management tasks in NASA domains are the focus of the investigation. Research is being conducted to support the development of guidance for designers that will enable them to take human interface considerations into account during the creation of intelligent systems.

  17. Design of a mobile brain computer interface-based smart multimedia controller.

    PubMed

    Tseng, Kevin C; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh

    2015-03-06

    Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user's physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user's physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user's EEG features and select music according to his/her state. The relationship between the user's state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback according to a user's EEG features may positively improve the user's attention state.
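
    As a rough illustration of how such a controller might map an EEG feature to a music choice, the sketch below computes an attention index as the ratio of beta power to alpha-plus-theta power and switches playlists on a threshold. The index definition, threshold, and sampling rate are assumptions for illustration, not the authors' method.

        import numpy as np
        from scipy.signal import welch

        FS = 256  # sampling rate in Hz (assumed)

        def band_power(eeg, lo, hi):
            # power spectral density via Welch's method, summed over a band
            f, pxx = welch(eeg, fs=FS, nperseg=FS * 2)
            return pxx[(f >= lo) & (f < hi)].sum()

        def attention_index(eeg):
            theta = band_power(eeg, 4, 8)
            alpha = band_power(eeg, 8, 13)
            beta = band_power(eeg, 13, 30)
            return beta / (alpha + theta)

        def pick_track(eeg, calm_tracks, upbeat_tracks):
            # low attention -> stimulating music; high attention -> calm music
            return upbeat_tracks[0] if attention_index(eeg) < 0.5 else calm_tracks[0]

        eeg = np.random.default_rng(0).normal(size=FS * 10)  # 10 s of fake EEG
        print(pick_track(eeg, ["nocturne.mp3"], ["march.mp3"]))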

  18. Eye-gaze control of the computer interface: Discrimination of zoom intent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-10-01

    An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident, preceding a decision to zoom-in, zoom-out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye-gaze was collected from 10 subjects in a controlled experiment, requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200-1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.
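
    The discriminant-analysis step maps naturally onto a standard linear discriminant classifier. The sketch below is a minimal stand-in, with hypothetical gaze-cluster features (fixation duration, cluster dispersion, pupil diameter) rather than the variable combinations actually tested in the study.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # rows: gaze clusters; columns: fixation duration (ms),
        # cluster dispersion (px), pupil diameter (mm) -- illustrative features
        X = np.array([
            [1400, 35, 3.1], [1500, 30, 3.2],   # preceding zoom-in
            [1250, 80, 2.9], [1200, 90, 2.8],   # preceding zoom-out
            [600, 140, 3.0], [650, 150, 3.1],   # no zoom
        ])
        y = np.array(["zoom_in", "zoom_in", "zoom_out", "zoom_out", "none", "none"])

        mda = LinearDiscriminantAnalysis().fit(X, y)
        print(mda.predict([[1350, 40, 3.0]]))   # -> likely "zoom_in"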

  19. Rapid P300 brain-computer interface communication with a head-mounted display

    PubMed Central

    Käthner, Ivo; Kübler, Andrea; Halder, Sebastian

    2015-01-01

    Visual ERP (P300) based brain-computer interfaces (BCIs) allow for fast and reliable spelling and are intended as a muscle-independent communication channel for people with severe paralysis. However, they require the presentation of visual stimuli in the field of view of the user. A head-mounted display could allow convenient presentation of visual stimuli in situations where mounting a conventional monitor might be difficult or not feasible (e.g., at a patient's bedside). To explore if similar accuracies can be achieved with a virtual reality (VR) headset compared to a conventional flat screen monitor, we conducted an experiment with 18 healthy participants. We also evaluated it with a person in the locked-in state (LIS) to verify that usage of the headset is possible for a severely paralyzed person. Healthy participants performed online spelling with three different display methods. In one condition a 5 × 5 letter matrix was presented on a conventional 22 inch TFT monitor. Two configurations of the VR headset were tested. In the first (glasses A), the same 5 × 5 matrix filled the field of view of the user. In the second (glasses B), single letters of the matrix filled the field of view of the user. The participant in the LIS tested the VR headset on three different occasions (glasses A condition only). For healthy participants, average online spelling accuracies were 94% (15.5 bits/min) using three flash sequences for spelling with the monitor and glasses A and 96% (16.2 bits/min) with glasses B. In one session, the participant in the LIS reached an online spelling accuracy of 100% (10 bits/min) using the glasses A condition. We also demonstrated that spelling with one flash sequence is possible with the VR headset for healthy users (mean: 32.1 bits/min, maximum reached by one user: 71.89 bits/min at 100% accuracy). We conclude that the VR headset allows for rapid P300 BCI communication in healthy users and may be a suitable display option for severely paralyzed users.
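
    The bits-per-minute figures quoted above follow from the standard Wolpaw information-transfer-rate formula; the worked example below reproduces the monitor/glasses A condition. The selection rate is inferred from the reported numbers, since the abstract does not state it directly.

        from math import log2

        def wolpaw_bits_per_selection(n, p):
            # n: number of possible selections; p: selection accuracy
            if p >= 1.0:
                return log2(n)
            return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

        n, p = 25, 0.94                    # 5 x 5 matrix, 94% accuracy
        b = wolpaw_bits_per_selection(n, p)
        print(round(b, 2))                 # ~4.04 bits per selection
        print(round(b * 3.84, 1))          # ~15.5 bits/min at ~3.84 selections/min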

  20. Influence of visual path information on human heading perception during rotation.

    PubMed

    Li, Li; Chen, Jing; Peng, Xiaozhe

    2009-03-31

    How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.

  1. Turning Shortcomings into Challenges: Brain-Computer Interfaces for Games

    NASA Astrophysics Data System (ADS)

    Nijholt, Anton; Reuderink, Boris; Oude Bos, Danny

    In recent years we have seen a rising interest in brain-computer interfacing for human-computer interaction and potential game applications. Until now, however, BCI has almost exclusively been used to measure the affective state of the user or in neurofeedback games. There have hardly been any attempts to design BCI games in which BCI is treated as one of several input modalities that can be used to control the game. One reason may be that research still follows the paradigms of the traditional, medically oriented BCI approaches. In this paper we discuss current BCI research from the viewpoint of games and game design. It is hoped that this survey will make clear that we need to design different games than we used to, but that such games can nevertheless be interesting and exciting.

  2. Evaluation of a wireless wearable tongue–computer interface by individuals with high-level spinal cord injuries

    PubMed Central

    Huo, Xueliang; Ghovanloo, Maysam

    2010-01-01

    The tongue drive system (TDS) is an unobtrusive, minimally invasive, wearable and wireless tongue–computer interface (TCI), which can infer its users' intentions, represented in their volitional tongue movements, by detecting the position of a small permanent magnetic tracer attached to the users' tongues. Any specific tongue movements can be translated into user-defined commands and used to access and control various devices in the users' environments. The latest external TDS (eTDS) prototype is built on a wireless headphone and interfaced to a laptop PC and a powered wheelchair. Using customized sensor signal processing algorithms and graphical user interface, the eTDS performance was evaluated by 13 naive subjects with high-level spinal cord injuries (C2–C5) at the Shepherd Center in Atlanta, GA. Results of the human trial show that an average information transfer rate of 95 bits/min was achieved for computer access with 82% accuracy. This information transfer rate is about two times higher than the EEG-based BCIs that are tested on human subjects. It was also demonstrated that the subjects had immediate and full control over the powered wheelchair to the extent that they were able to perform complex wheelchair navigation tasks, such as driving through an obstacle course. PMID:20332552
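
    The inference step, turning a magnetic-field reading into a discrete command, can be illustrated with a nearest-template classifier. The templates, sensor dimensionality, and command set below are invented for illustration; the eTDS uses its own customized sensor signal processing algorithms.

        import numpy as np

        TEMPLATES = {  # mean 3-axis field vectors captured during calibration (hypothetical)
            "left": np.array([12.0, -3.0, 5.0]),
            "right": np.array([-11.0, -2.5, 5.5]),
            "select": np.array([0.5, 9.0, -4.0]),
            "neutral": np.array([0.0, 0.0, 0.0]),
        }

        def infer_command(reading):
            # choose the tongue-position template closest to the current reading
            dists = {cmd: np.linalg.norm(reading - v) for cmd, v in TEMPLATES.items()}
            return min(dists, key=dists.get)

        print(infer_command(np.array([11.2, -2.8, 4.7])))  # -> "left"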

  3. The experience of agency in human-computer interactions: a review

    PubMed Central

    Limerick, Hannah; Coyle, David; Moore, James W.

    2014-01-01

    The sense of agency is the experience of controlling both one’s body and the external environment. Although the sense of agency has been studied extensively, there is a paucity of studies in applied “real-life” situations. One applied domain that seems highly relevant is human-computer-interaction (HCI), as an increasing number of our everyday agentive interactions involve technology. Indeed, HCI has long recognized the feeling of control as a key factor in how people experience interactions with technology. The aim of this review is to summarize and examine the possible links between sense of agency and understanding control in HCI. We explore the overlap between HCI and sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers. An overarching consideration is how agency research can inform HCI and vice versa. Finally, we discuss the potential ethical implications of personal responsibility in an ever-increasing society of technology users and intelligent machine interfaces. PMID:25191256

  4. [The current state of the brain-computer interface problem].

    PubMed

    Shurkhay, V A; Aleksandrova, E V; Potapov, A A; Goryainov, S A

    2015-01-01

    It was only 40 years ago that the first PC appeared. Over this period, rather short in historical terms, we have witnessed revolutionary changes in the lives of individuals and of society as a whole. Computer technologies are tightly connected with every field, either directly or indirectly. We can currently claim that computers are many times superior to the human mind on a number of parameters; however, machines lack the key feature: they are incapable of independent thinking (like a human). Nevertheless, the key to the successful development of humankind is collaboration between the brain and the computer rather than competition. Such collaboration, in which a computer broadens, supplements, or replaces some brain functions, is known as a brain-computer interface. Our review focuses on real-life implementations of this collaboration.

  5. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
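
    The gain from trial averaging reported above (70.0% single-trial vs. 89.5% with 10-trial averages) reflects a standard pipeline: average epochs over repeated trials, then classify with a support vector machine. A minimal sketch with synthetic data follows; the shapes and features are illustrative only.

        import numpy as np
        from sklearn.svm import SVC

        def trial_average(epochs, k):
            # epochs: (n_trials, n_features); average disjoint groups of k trials
            n = (len(epochs) // k) * k
            return epochs[:n].reshape(-1, k, epochs.shape[1]).mean(axis=1)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 64))     # single-trial ERP feature vectors (fake)
        y = np.repeat([0, 1], 60)          # 0 = unattended, 1 = attended direction

        X_avg = np.vstack([trial_average(X[y == 0], 10), trial_average(X[y == 1], 10)])
        y_avg = np.array([0] * 6 + [1] * 6)

        clf = SVC(kernel="linear").fit(X_avg, y_avg)
        print(clf.predict(X_avg[:1]))      # classify an averaged epoch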

  6. Human recognition based on head-shoulder contour extraction and BP neural network

    NASA Astrophysics Data System (ADS)

    Kong, Xiao-fang; Wang, Xiu-qin; Gu, Guohua; Chen, Qian; Qian, Wei-xian

    2014-11-01

    In practical application scenarios like video surveillance and human-computer interaction, human body movements are uncertain because the human body is a non-rigid object. Because the head-shoulder part of the human body is less affected by movement and is seldom obscured by other objects, a head-shoulder model with stable characteristics can serve as a detection feature to describe the human body in human detection and recognition. In order to extract the head-shoulder contour accurately, this paper proposes a method for establishing a head-shoulder model that combines edge detection with mean-shift image clustering. First, an adaptive mixture-of-Gaussians background-update method is used to extract targets from the video sequence. Second, edge detection is used to extract the contour of moving objects, and the mean-shift algorithm clusters parts of the target's contour. Third, the head-shoulder model is established according to the width-to-height ratio of the human head-shoulder region combined with the projection histogram of the binary image, and the eigenvectors of the head-shoulder contour are acquired. Finally, the relationship between head-shoulder contour eigenvectors and moving objects is learned by training a back-propagation (BP) neural network classifier, and the head-shoulder model can be matched for human detection and recognition. Experiments have shown that the proposed method, combining edge detection and the mean-shift algorithm, can extract the complete head-shoulder contour with low computational complexity and high efficiency.
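
    The stated pipeline, background subtraction, then edge detection, then mean-shift clustering of the contour, can be sketched with off-the-shelf building blocks. The parameters below (Canny thresholds, mean-shift bandwidth, the "top-most cluster is the head-shoulder" heuristic) are assumptions for illustration, not the authors' settings.

        import cv2
        import numpy as np
        from sklearn.cluster import MeanShift

        backsub = cv2.createBackgroundSubtractorMOG2()   # adaptive Gaussian-mixture background

        def head_shoulder_points(frame):
            mask = backsub.apply(frame)                  # moving-object mask
            edges = cv2.Canny(mask, 100, 200)            # contour of the moving object
            ys, xs = np.nonzero(edges)
            pts = np.column_stack([xs, ys])
            if len(pts) < 10:
                return None
            labels = MeanShift(bandwidth=30).fit_predict(pts)  # cluster contour fragments
            # keep the cluster with the smallest mean y (top of the image),
            # assumed to correspond to the head-shoulder region
            top = min(range(labels.max() + 1),
                      key=lambda c: pts[labels == c][:, 1].mean())
            return pts[labels == top]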

  7. Eye-head coordination during free exploration in human and cat.

    PubMed

    Einhäuser, Wolfgang; Moeller, Gudrun U; Schumann, Frank; Conradt, Jörg; Vockeroth, Johannes; Bartl, Klaus; Schneider, Erich; König, Peter

    2009-05-01

    Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, will largely depend on the ecological niche the animal occupies and the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contribution of eye-in-head and head-in-world movements in cats is measured, and the results are compared to recent human data. For the cat, a lightweight custom-made head-mounted video setup was used (CatCam). Human data were acquired with the novel EyeSeeCam device, which measures eye position to control a gaze-contingent camera in real time. For both species, analysis was based on simultaneous recordings of eye and head movements during free exploration of a natural environment. Despite the substantial differences in ecological niche, photoreceptor density, and saccade frequency, eye-movement characteristics in both species are remarkably similar. Coordinated eye and head movements dominate the dynamics of the retinal input. Interestingly, compensatory (gaze-stabilizing) movements play a more dominant role in humans than they do in cats. This finding was interpreted to be a consequence of substantially different timescales for head movements, with cats' head movements showing roughly 5-fold faster dynamics than humans'. For both species, models and laboratory experiments therefore need to account for these rich input dynamics to obtain validity for ecologically realistic settings.

  8. Certification for civil flight decks and the human-computer interface

    NASA Technical Reports Server (NTRS)

    Mcclumpha, Andrew J.; Rudisill, Marianne

    1994-01-01

    This paper will address the issue of human factor aspects of civil flight deck certification, with emphasis on the pilot's interface with automation. In particular, three questions will be asked that relate to this certification process: (1) are the methods, data, and guidelines available from human factors to adequately address the problems of certifying as safe and error tolerant the complex automated systems of modern civil transport aircraft; (2) do aircraft manufacturers effectively apply human factors information during the aircraft flight deck design process; and (3) do regulatory authorities effectively apply human factors information during the aircraft certification process?

  9. Levels of detail analysis of microwave scattering from human head models for brain stroke detection

    PubMed Central

    2017-01-01

    In this paper, we have presented a microwave scattering analysis from multiple human head models. This study incorporates different levels of detail in the human head models and examines their effect on the microwave scattering phenomenon. Two levels of detail are taken into account: (i) a simplified ellipse-shaped head model, and (ii) an anatomically realistic head model, implemented using 2-D geometry. In addition, the heterogeneous and frequency-dispersive behavior of the brain tissues has also been incorporated in our head models. It is identified during this study that the microwave scattering phenomenon changes significantly once the complexity of the head model is increased by incorporating more details using a magnetic resonance imaging database. It is also found that the microwave scattering results match in both types of head model (i.e., geometrically simple and anatomically realistic) once the measurements are made in the structurally simplified regions. However, the results diverge considerably in the complex areas of the brain due to the arbitrarily shaped interfaces of tissue layers in the anatomically realistic head model. After incorporating various levels of detail, the solution of the subject microwave scattering problem and the measurement of transmitted and backscattered signals were obtained using the finite element method. Mesh convergence analysis was also performed to achieve error-free results with a minimum number of mesh elements and fewer degrees of freedom in a fast computational time. The results were promising and the E-field values converged for both simple and complex geometrical models. However, the E-field difference between the two types of head model at the same reference point differed considerably in magnitude: at the complex location a difference of 0.04236 V/m was measured, compared to 0.00197 V/m at the simple location. This study also provides a comparative analysis between direct and iterative solvers.

  10. Virtual interface environment workstations

    NASA Technical Reports Server (NTRS)

    Fisher, S. S.; Wenzel, E. M.; Coler, C.; Mcgreevy, M. W.

    1988-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.

  11. A USB 2.0 computer interface for the UCO/Lick CCD cameras

    NASA Astrophysics Data System (ADS)

    Wei, Mingzhi; Stover, Richard J.

    2004-09-01

    The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.

  12. Visual perception affected by motivation and alertness controlled by a noninvasive brain-computer interface.

    PubMed

    Maksimenko, Vladimir A; Runnova, Anastasia E; Zhuravlev, Maksim O; Makarov, Vladimir V; Nedayvozov, Vladimir; Grubov, Vadim V; Pchelintceva, Svetlana V; Hramov, Alexander E; Pisarchik, Alexander N

    2017-01-01

    The influence of motivation and alertness on brain activity associated with visual perception was studied experimentally using the Necker cube, whose ambiguity was controlled by the contrast of its ribs. The wavelet analysis of recorded multichannel electroencephalograms (EEG) allowed us to distinguish two different scenarios while the brain processed the ambiguous stimulus. The first scenario is characterized by a particular destruction of the alpha rhythm (8-12 Hz) with a simultaneous increase in beta-wave activity (20-30 Hz), whereas in the second scenario, the beta rhythm is not well pronounced while the alpha-wave energy remains unchanged. The experiments were carried out with a group of financially motivated subjects and another group of unpaid volunteers. It was found that the first scenario occurred mainly in the motivated group. This can be explained by the increased alertness of the motivated subjects. The prevalence of the first scenario was also observed in a group of subjects to whom images with higher ambiguity were presented. We believe that the revealed scenarios can occur not only during the perception of bistable images, but also in other perceptual tasks requiring decision making. The obtained results may have important applications for monitoring and controlling human alertness in situations requiring substantial attention. On the basis of the obtained results, we built a brain-computer interface to estimate and control the degree of alertness in real time.
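
    The two scenarios can be operationalized as simple band-power comparisons against a pre-stimulus baseline, as in the sketch below. The 8-12 Hz and 20-30 Hz bands come from the abstract; the thresholds and the Welch-based power estimate are illustrative substitutes for the wavelet analysis actually used.

        import numpy as np
        from scipy.signal import welch

        FS = 250  # sampling rate in Hz (assumed)

        def band_power(x, lo, hi):
            f, pxx = welch(x, fs=FS, nperseg=FS)
            return pxx[(f >= lo) & (f <= hi)].sum()

        def scenario(baseline, stimulus):
            alpha_drop = band_power(stimulus, 8, 12) < 0.8 * band_power(baseline, 8, 12)
            beta_rise = band_power(stimulus, 20, 30) > 1.2 * band_power(baseline, 20, 30)
            return 1 if (alpha_drop and beta_rise) else 2   # scenario 1 or 2

        rng = np.random.default_rng(0)
        print(scenario(rng.normal(size=FS * 2), rng.normal(size=FS * 2)))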

  13. Brain-computer interfaces in neurological rehabilitation.

    PubMed

    Daly, Janis J; Wolpaw, Jonathan R

    2008-11-01

    Recent advances in analysis of brain signals, training patients to control these signals, and improved computing capabilities have enabled people with severe motor disabilities to use their brain signals for communication and control of objects in their environment, thereby bypassing their impaired neuromuscular system. Non-invasive, electroencephalogram (EEG)-based brain-computer interface (BCI) technologies can be used to control a computer cursor or a limb orthosis, for word processing and accessing the internet, and for other functions such as environmental control or entertainment. By re-establishing some independence, BCI technologies can substantially improve the lives of people with devastating neurological disorders such as advanced amyotrophic lateral sclerosis. BCI technology might also restore more effective motor control to people after stroke or other traumatic brain disorders by helping to guide activity-dependent brain plasticity by use of EEG brain signals to indicate to the patient the current state of brain activity and to enable the user to subsequently lower abnormal activity. Alternatively, by use of brain signals to supplement impaired muscle control, BCIs might increase the efficacy of a rehabilitation protocol and thus improve muscle control for the patient.

  14. On the tip of the tongue: learning typing and pointing with an intra-oral computer interface.

    PubMed

    Caltenco, Héctor A; Breidegard, Björn; Struijk, Lotte N S Andreasen

    2014-07-01

    To evaluate typing and pointing performance and improvement over time of four able-bodied participants using an intra-oral tongue-computer interface for computer control. A physically disabled individual may lack the ability to efficiently control standard computer input devices. There have been several efforts to produce and evaluate interfaces that provide individuals with physical disabilities the possibility to control personal computers. Training with the intra-oral tongue-computer interface was performed by playing games over 18 sessions. Skill improvement was measured through typing and pointing exercises at the end of each training session. Typing throughput improved from averages of 2.36 to 5.43 correct words per minute. Pointing throughput improved from averages of 0.47 to 0.85 bits/s. Target tracking performance, measured as relative time on target, improved from averages of 36% to 47%. Path following throughput improved from averages of 0.31 to 0.83 bits/s and decreased to 0.53 bits/s with more difficult tasks. Learning curves support the notion that the tongue can rapidly learn novel motor tasks. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, which makes the tongue a feasible input organ for computer control. Intra-oral computer interfaces could provide individuals with severe upper-limb mobility impairments the opportunity to control computers and automatic equipment. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, but does not cause fatigue easily and might be invisible to other people, which is highly prioritized by assistive device users. Combination of visual and auditory feedback is vital for a good performance of an intra-oral computer interface and helps to reduce involuntary or erroneous activations.

  15. Soft brain-machine interfaces for assistive robotics: A novel control approach.

    PubMed

    Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash

    2017-07-01

    Robotic systems offer the possibility of improving the life quality of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for an intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real-time a unidimensional index from user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence on the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.

  16. A Novel Feature Optimization for Wearable Human-Computer Interfaces Using Surface Electromyography Sensors

    PubMed Central

    Zhang, Xiong; Zhao, Yacong; Zhang, Yu; Zhong, Xuefei; Fan, Zhaowen

    2018-01-01

    The novel human-computer interface (HCI) using bioelectrical signals as input is a valuable tool to improve the lives of people with disabilities. In this paper, surface electromyography (sEMG) signals induced by four classes of wrist movements were acquired from four sites on the lower arm with our designed system. Forty-two features were extracted from the time, frequency and time-frequency domains. Optimal channels were determined from the single-channel classification performance ranking. Optimal features were selected according to a modified entropy criterion (EC) and the Fisher discrimination (FD) criterion. The feature selection results were evaluated by four different classifiers, and compared with other conventional feature subsets. In online tests, the wearable system acquired real-time sEMG signals. The selected features and trained classifier model were used to control a telecar through four different paradigms in a designed environment with simple obstacles. Performance was evaluated based on travel time (TT) and recognition rate (RR). The results of hardware evaluation verified the feasibility of our acquisition systems, and ensured signal quality. Single-channel analysis results indicated that the channel located on the extensor carpi ulnaris (ECU) performed best, with a mean classification accuracy of 97.45% for all movement pairs. Channels placed on the ECU and the extensor carpi radialis (ECR) were selected according to the accuracy rank. Experimental results showed that the proposed FD method was better than other feature selection methods and single-type features. The combination of FD and random forest (RF) performed best in offline analysis, with 96.77% multi-class RR. Online results illustrated that the state-machine paradigm with a 125 ms window had the highest maneuverability and was closest to real-life control. Subjects could accomplish online sessions with the three sEMG-based paradigms, with average times of 46.02, 49.06 and 48.08 s, respectively.
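
    The best-performing combination reported, Fisher-discrimination feature ranking followed by a random forest, has a compact generic form. The sketch below uses synthetic data with the study's 42 features and four classes; the subset size of ten is an arbitrary illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def fisher_scores(X, y):
            # between-class scatter over within-class scatter, per feature
            classes = np.unique(y)
            mu = X.mean(axis=0)
            num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
            den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
            return num / (den + 1e-12)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 42))      # 42 sEMG features, as in the study
        y = rng.integers(0, 4, size=200)    # four wrist-movement classes

        top = np.argsort(fisher_scores(X, y))[::-1][:10]   # keep the 10 best features
        clf = RandomForestClassifier(n_estimators=100).fit(X[:, top], y)
        print(clf.score(X[:, top], y))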

  17. Human factors dimensions in the evolution of increasingly automated control rooms for near-earth satellites

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1982-01-01

    The NASA-Goddard Space Flight Center is responsible for the control and ground support for all of NASA's unmanned near-earth satellites. Traditionally, each satellite had its own dedicated mission operations room. In the mid-seventies, an integration of some of these dedicated facilities was begun with the primary objective to reduce costs. In this connection, the Multi-Satellite Operations Control Center (MSOCC) was designed. MSOCC is currently a labor-intensive operation. Recently, Goddard has become increasingly aware of human factors and human-machine interface issues. A summary is provided of some of the attempts to apply human factors considerations in the design of command and control environments. Current and future activities with respect to human factors and systems design are discussed, giving attention to the allocation of tasks between human and computer, and the interface for the human-computer dialogue.

  18. Command and control interfaces for advanced neuroprosthetic applications.

    PubMed

    Scott, T R; Haugland, M

    2001-10-01

    Command and control interfaces permit the intention and situation of the user to influence the operation of the neural prosthesis. The wishes of the user are communicated via command interfaces to the neural prosthesis and the situation of the user by feedback control interfaces. Both these interfaces have been reviewed separately and are discussed in light of the current state of the art and projections for the future. It is apparent that as system functional complexity increases, the need for simpler command interfaces will increase. Such systems will demand more information to function effectively in order not to unreasonably increase user attention overhead. This will increase the need for bioelectric and biomechanical signals in a comprehensible form via elegant feedback control interfaces. Implementing such systems will also increase the computational demand on such neural prostheses.

  19. Collaborative Brain-Computer Interface for Aiding Decision-Making

    PubMed Central

    Poli, Riccardo; Valeriani, Davide; Cinel, Caterina

    2014-01-01

    We look at the possibility of integrating the percepts from multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach involves the combination of a brain-computer interface with human behavioural responses. To test ideas in controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we used a majority rule and three rules that weight the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies for improving decision making. PMID:25072739
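
    The contrast between the majority rule and the confidence-weighted rules can be captured in a few lines. The weights below are illustrative; in the study they were derived from response times and the neural or neuro-behavioural features.

        import numpy as np

        votes = np.array([1, -1, -1, 1, -1])          # +1 = "same", -1 = "different"
        conf = np.array([0.9, 0.2, 0.3, 0.8, 0.1])    # per-observer confidence (illustrative)

        majority = np.sign(votes.sum())               # unweighted group decision -> -1
        weighted = np.sign((votes * conf).sum())      # confidence-weighted decision -> +1
        print(majority, weighted)                     # weighting can overturn the majority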

  20. Non-invasive brain-computer interface system: towards its application as assistive technology.

    PubMed

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio

    2008-04-15

    The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management and mobility, according to the user's residual motor abilities. Brain-computer interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology for severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers to the user a communication interface that is matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects have not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinical useful BCI.

  1. Brain-Computer Interface with Inhibitory Neurons Reveals Subtype-Specific Strategies.

    PubMed

    Mitani, Akinori; Dong, Mingyuan; Komiyama, Takaki

    2018-01-08

    Brain-computer interfaces have seen an increase in popularity due to their potential for direct neuroprosthetic applications for amputees and disabled individuals. Supporting this promise, animals, including humans, can learn even arbitrary mappings between the activity of cortical neurons and the movement of prosthetic devices [1-4]. However, the performance of neuroprosthetic device control has been nowhere near that of limb control in healthy individuals, presenting a dire need to improve performance. One potential limitation is the fact that previous work has not distinguished diverse cell types in the neocortex, even though different cell types possess distinct functions in cortical computations [5-7] and likely distinct capacities to control brain-computer interfaces. Here, we made a first step in addressing this issue by tracking the plastic changes of three major types of cortical inhibitory neurons (INs) during a neuron-pair operant conditioning task using two-photon imaging of IN subtypes expressing GCaMP6f. Mice were rewarded when the activity of the positive target neuron (N+) exceeded that of the negative target neuron (N-) beyond a set threshold. Mice improved performance with all subtypes, but the strategies were subtype specific. When parvalbumin (PV)-expressing INs were targeted, the activity of N- decreased. However, targeting of somatostatin (SOM)- and vasoactive intestinal peptide (VIP)-expressing INs led to an increase of the N+ activity. These results demonstrate that INs can be individually modulated in a subtype-specific manner and highlight the versatility of neural circuits in adapting to new demands by using cell-type-specific strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Near infrared spectroscopy based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Ranganatha, Sitaram; Hoshi, Yoko; Guan, Cuntai

    2005-04-01

    A brain-computer interface (BCI) provides users with an alternative output channel other than the normal output path of the brain. BCI has recently been given much attention as an alternate mode of communication and control for the disabled, such as patients suffering from Amyotrophic Lateral Sclerosis (ALS) or "locked-in" syndrome. BCI may also find applications in military, education and entertainment. Most of the existing BCI systems which rely on the brain's electrical activity use scalp EEG signals. The scalp EEG is an inherently noisy and non-linear signal. The signal is detrimentally affected by various artifacts such as EOG, EMG, ECG, and so forth. EEG is cumbersome to use in practice because of the need to apply conductive gel and the need for the subject to remain immobile. There is an urgent need for a more accessible interface that uses a more direct measure of cognitive function to control an output device. The optical response of Near Infrared Spectroscopy (NIRS) denoting brain activation can be used as an alternative to electrical signals, with the intention of developing a more practical and user-friendly BCI. In this paper, a new method of brain-computer interface (BCI) based on NIRS is proposed. Preliminary results of our experiments towards developing this system are reported.

  3. Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.

    ERIC Educational Resources Information Center

    Deaudelin, Colette; Dussault, Marc; Brodeur, Monique

    2003-01-01

    Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…

  4. Control of a nursing bed based on a hybrid brain-computer interface.

    PubMed

    Peng, Nengneng; Zhang, Rui; Zeng, Haihua; Wang, Fei; Li, Kai; Li, Yuanqing; Zhuang, Xiaobin

    2016-08-01

    In this paper, we propose an intelligent nursing bed system which is controlled by a hybrid brain-computer interface (BCI) involving steady-state visual evoked potential (SSVEP) and P300. Specifically, the hybrid BCI includes an asynchronous brain switch based on SSVEP and P300, and a P300-based BCI. The brain switch is used to turn on/off the control system of the electric nursing bed through idle/control state detection, whereas the P300-based BCI is for operating the nursing bed. At the beginning, the user may focus on one group of flashing buttons in the graphic user interface (GUI) of the brain switch, which can simultaneously evoke SSVEP and P300, to switch on the control system. Here, the combination of SSVEP and P300 is used for improving the performance of the brain switch. Next, the user can control the nursing bed using the P300-based BCI. The GUI of the P300-based BCI includes 10 flashing buttons, which correspond to 10 functional operations, namely, left-side up, left-side down, back up, back down, bedpan open, bedpan close, legs up, legs down, right-side up, and right-side down. For instance, he/she can focus on the flashing button "back up" in the GUI of the P300-based BCI to activate the corresponding control such that the nursing bed is adjusted up. Eight healthy subjects participated in our experiment, and obtained an average accuracy of 93.75% and an average false positive rate (FPR) of 0.15 event/min. The effectiveness of our system was thus demonstrated.
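
    The two-stage control flow, an asynchronous brain switch gating a ten-command P300 menu, reduces to a small state machine. The detector callables below are placeholders for the SSVEP+P300 and P300 classifiers described in the abstract.

        COMMANDS = ["left-side up", "left-side down", "back up", "back down",
                    "bedpan open", "bedpan close", "legs up", "legs down",
                    "right-side up", "right-side down"]

        def control_loop(brain_switch_detected, p300_target, send_to_bed, steps=10):
            # brain_switch_detected(): True when the combined SSVEP+P300 response is found
            # p300_target(): index 0-9 of the attended flashing button, or None
            active = False
            for _ in range(steps):
                if not active:
                    active = brain_switch_detected()   # idle/control state detection
                else:
                    i = p300_target()
                    if i is not None:
                        send_to_bed(COMMANDS[i])       # issue one bed adjustment

        # toy run: the switch turns on immediately, the user always selects "back up"
        control_loop(lambda: True, lambda: 2, print, steps=3)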

  5. Design of a Mobile Brain Computer Interface-Based Smart Multimedia Controller

    PubMed Central

    Tseng, Kevin C.; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh

    2015-01-01

    Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user’s physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user’s physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user’s EEG features and select music according to his/her state. The relationship between the user’s state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback according to a user’s EEG features may positively improve the user’s attention state. PMID:25756862

  6. Pigeons (C. livia) Follow Their Head during Turning Flight: Head Stabilization Underlies the Visual Control of Flight.

    PubMed

    Ros, Ivo G; Biewener, Andrew A

    2017-01-01

    Similar flight control principles operate across insect and vertebrate fliers. These principles indicate that robust solutions have evolved to meet complex behavioral challenges. Following from studies of visual and cervical feedback control of flight in insects, we investigate the role of head stabilization in providing feedback cues for controlling turning flight in pigeons. Based on previous observations that the eyes of pigeons remain at relatively fixed orientations within the head during flight, we test potential sensory control inputs derived from head and body movements during 90° aerial turns. We observe that periods of angular head stabilization alternate with rapid head repositioning movements (head saccades), and confirm that control of head motion is decoupled from aerodynamic and inertial forces acting on the bird's continuously rotating body during turning flapping flight. Visual cues inferred from head saccades correlate with changes in flight trajectory; whereas the magnitude of neck bending predicts angular changes in body position. The control of head motion to stabilize a pigeon's gaze may therefore facilitate extraction of important motion cues, in addition to offering mechanisms for controlling body and wing movements. Strong similarities between the sensory flight control of birds and insects may also inspire novel designs of robust controllers for human-engineered autonomous aerial vehicles.

  7. Pigeons (C. livia) Follow Their Head during Turning Flight: Head Stabilization Underlies the Visual Control of Flight

    PubMed Central

    Ros, Ivo G.; Biewener, Andrew A.

    2017-01-01

    Similar flight control principles operate across insect and vertebrate fliers. These principles indicate that robust solutions have evolved to meet complex behavioral challenges. Following from studies of visual and cervical feedback control of flight in insects, we investigate the role of head stabilization in providing feedback cues for controlling turning flight in pigeons. Based on previous observations that the eyes of pigeons remain at relatively fixed orientations within the head during flight, we test potential sensory control inputs derived from head and body movements during 90° aerial turns. We observe that periods of angular head stabilization alternate with rapid head repositioning movements (head saccades), and confirm that control of head motion is decoupled from aerodynamic and inertial forces acting on the bird's continuously rotating body during turning flapping flight. Visual cues inferred from head saccades correlate with changes in flight trajectory; whereas the magnitude of neck bending predicts angular changes in body position. The control of head motion to stabilize a pigeon's gaze may therefore facilitate extraction of important motion cues, in addition to offering mechanisms for controlling body and wing movements. Strong similarities between the sensory flight control of birds and insects may also inspire novel designs of robust controllers for human-engineered autonomous aerial vehicles. PMID:29249929

  8. The development of the cucullaris muscle and the branchial musculature in the Longnose Gar, (Lepisosteus osseus, Lepisosteiformes, Actinopterygii) and its implications for the evolution and development of the head/trunk interface in vertebrates.

    PubMed

    Naumann, Benjamin; Warth, Peter; Olsson, Lennart; Konstantinidis, Peter

    2017-11-01

    The vertebrate head/trunk interface is the region of the body where the different developmental programs of the head and trunk come in contact. Many anatomical structures that develop in this transition zone differ from similar structures in the head or the trunk. This is best exemplified by the cucullaris/trapezius muscle, spanning the head/trunk interface by connecting the head to the pectoral girdle. The source of this muscle has been claimed to be either the unsegmented head mesoderm or the somites of the trunk. However most recent data on the development of the cucullaris muscle are derived from tetrapods and information from actinopterygian taxa is scarce. We used classical histology in combination with fluorescent whole-mount antibody staining and micro-computed tomography to investigate the developmental pattern of the cucullaris and the branchial muscles in a basal actinopterygian, the Longnose gar (Lepisosteus osseus). Our results show (1) that the cucullaris has been misidentified in earlier studies on its development in Lepisosteus. (2) Cucullaris development is delayed compared to other head and trunk muscles. (3) This developmental pattern of the cucullaris is similar to that reported from some tetrapod taxa. (4) That the retractor dorsalis muscle of L. osseus shows a delayed developmental pattern similar to the cucullaris. Our data are in agreement with an explanatory scenario for the cucullaris development in tetrapods, suggesting that these mechanisms are conserved throughout the Osteichthyes. Furthermore the developmental pattern of the retractor dorsalis, also spanning the head/trunk interface, seems to be controlled by similar mechanisms. © 2017 Wiley Periodicals, Inc.

  9. Interfacing sensory input with motor output: does the control architecture converge to a serial process along a single channel?

    PubMed Central

    van de Kamp, Cornelis; Gawthrop, Peter J.; Gollee, Henrik; Lakie, Martin; Loram, Ian D.

    2013-01-01

    Modular organization in control architecture may underlie the versatility of human motor control; but the nature of the interface relating sensory input through task-selection in the space of performance variables to control actions in the space of the elemental variables is currently unknown. Our central question is whether the control architecture converges to a serial process along a single channel? In discrete reaction time experiments, psychologists have firmly associated a serial single channel hypothesis with refractoriness and response selection [psychological refractory period (PRP)]. Recently, we developed a methodology and evidence identifying refractoriness in sustained control of an external single degree-of-freedom system. We hypothesize that multi-segmental whole-body control also shows refractoriness. Eight participants controlled their whole body to ensure a head marker tracked a target as fast and accurately as possible. Analysis showed enhanced delays in response to stimuli with close temporal proximity to the preceding stimulus. Consistent with our preceding work, this evidence is incompatible with control as a linear time invariant process. This evidence is consistent with a single-channel serial ballistic process within the intermittent control paradigm with an intermittent interval of around 0.5 s. A control architecture reproducing intentional human movement control must reproduce refractoriness. Intermittent control is designed to provide computational time for an online optimization process and is appropriate for flexible adaptive control. For human motor control we suggest that parallel sensory input converges to a serial, single channel process involving planning, selection, and temporal inhibition of alternative responses prior to low dimensional motor output. Such design could aid robots to reproduce the flexibility of human control. PMID:23675342

  10. Probabilistic co-adaptive brain-computer interfacing

    NASA Astrophysics Data System (ADS)

    Bryan, Matthew J.; Martin, Stefan A.; Cheung, Willy; Rao, Rajesh P. N.

    2013-12-01

    Objective. Brain-computer interfaces (BCIs) are confronted with two fundamental challenges: (a) the uncertainty associated with decoding noisy brain signals, and (b) the need for co-adaptation between the brain and the interface so as to cooperatively achieve a common goal in a task. We seek to mitigate these challenges. Approach. We introduce a new approach to brain-computer interfacing based on partially observable Markov decision processes (POMDPs). POMDPs provide a principled approach to handling uncertainty and achieving co-adaptation in the following manner: (1) Bayesian inference is used to compute posterior probability distributions (‘beliefs’) over brain and environment state, and (2) actions are selected based on entire belief distributions in order to maximize total expected reward; by employing methods from reinforcement learning, the POMDP’s reward function can be updated over time to allow for co-adaptive behaviour. Main results. We illustrate our approach using a simple non-invasive BCI which optimizes the speed-accuracy trade-off for individual subjects based on the signal-to-noise characteristics of their brain signals. We additionally demonstrate that the POMDP BCI can automatically detect changes in the user’s control strategy and can co-adaptively switch control strategies on-the-fly to maximize expected reward. Significance. Our results suggest that the framework of POMDPs offers a promising approach for designing BCIs that can handle uncertainty in neural signals and co-adapt with the user on an ongoing basis. The fact that the POMDP BCI maintains a probability distribution over the user’s brain state allows a much more powerful form of decision making than traditional BCI approaches, which have typically been based on the output of classifiers or regression techniques. Furthermore, the co-adaptation of the system allows the BCI to make online improvements to its behaviour, adjusting itself automatically to the user’s changing control strategies.
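
    The Bayesian belief update at the core of this framing is compact. The sketch below tracks a posterior over two hidden intent states from a stream of noisy decoder outputs; the likelihood table is illustrative, not taken from the paper.

        import numpy as np

        def update_belief(belief, obs, likelihood):
            # belief: P(state); likelihood[state, obs] = P(obs | state)
            posterior = belief * likelihood[:, obs]
            return posterior / posterior.sum()

        likelihood = np.array([[0.7, 0.3],    # P(obs | state = "left")
                               [0.2, 0.8]])   # P(obs | state = "right")
        belief = np.array([0.5, 0.5])         # uniform prior over intents

        for obs in [0, 0, 1]:                 # decoded observations over time
            belief = update_belief(belief, obs, likelihood)
        print(belief)                         # belief concentrates on "left"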

  11. Human and Computer Control of Undersea Teleoperators

    DTIC Science & Technology

    1978-07-15

    This report reviews factors pertaining to man-machine interaction in the remote control of undersea vehicles (author: Thomas B. Sheridan; reporting period: 15 March 1977 to 14 June 1978).

  12. A Brain-Computer Interface (BCI) system to use arbitrary Windows applications by directly controlling mouse and keyboard.

    PubMed

    Spüler, Martin

    2015-08-01

    A Brain-Computer Interface (BCI) allows to control a computer by brain activity only, without the need for muscle control. In this paper, we present an EEG-based BCI system based on code-modulated visual evoked potentials (c-VEPs) that enables the user to work with arbitrary Windows applications. Other BCI systems, like the P300 speller or BCI-based browsers, allow control of one dedicated application designed for use with a BCI. In contrast, the system presented in this paper does not consist of one dedicated application, but enables the user to control mouse cursor and keyboard input on the level of the operating system, thereby making it possible to use arbitrary applications. As the c-VEP BCI method was shown to enable very fast communication speeds (writing more than 20 error-free characters per minute), the presented system is the next step in replacing the traditional mouse and keyboard and enabling complete brain-based control of a computer.
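
    Controlling the operating system's real cursor and keyboard, rather than one dedicated application, is the distinguishing idea here. A minimal stand-in using the cross-platform pyautogui library is sketched below; the command vocabulary is invented, and this is not the authors' implementation.

        import pyautogui  # synthesizes real OS-level mouse and keyboard events

        def execute(command):
            if command == "cursor_left":
                pyautogui.moveRel(-20, 0)      # nudge the actual OS cursor
            elif command == "cursor_right":
                pyautogui.moveRel(20, 0)
            elif command == "click":
                pyautogui.click()
            elif command.startswith("key:"):
                pyautogui.press(command[4:])   # e.g. "key:enter"

        for c in ["cursor_right", "cursor_right", "click", "key:a"]:
            execute(c)                         # decoded c-VEP selections, simulated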

  13. Are we there yet? Evaluating commercial grade brain-computer interface for control of computer applications by individuals with cerebral palsy.

    PubMed

    Taherian, Sarvnaz; Selitskiy, Dmitry; Pau, James; Davies, T Claire

    2017-02-01

    Using a commercial electroencephalography (EEG)-based brain-computer interface (BCI), the training and testing protocol for six individuals with spastic quadriplegic cerebral palsy (GMFCS and MACS IV and V) was evaluated. A customised, gamified training paradigm was employed. Over three weeks, the participants spent two sessions exploring the system, and up to six sessions playing the game which focussed on EEG feedback of left and right arm motor imagery. The participants showed variable inconclusive results in the ability to produce two distinct EEG patterns. Participant performance was influenced by physical illness, motivation, fatigue and concentration. The results from this case study highlight the infancy of BCIs as a form of assistive technology for people with cerebral palsy. Existing commercial BCIs are not designed according to the needs of end-users. Implications for Rehabilitation Mood, fatigue, physical illness and motivation influence the usability of a brain-computer interface. Commercial brain-computer interfaces are not designed for practical assistive technology use for people with cerebral palsy. Practical brain-computer interface assistive technologies may need to be flexible to suit individual needs.

  14. Towards Effective Non-Invasive Brain-Computer Interfaces Dedicated to Gait Rehabilitation Systems

    PubMed Central

    Castermans, Thierry; Duvinage, Matthieu; Cheron, Guy; Dutoit, Thierry

    2014-01-01

    In the last few years, significant progress has been made in the field of walk rehabilitation. Motor cortex signals in bipedal monkeys have been interpreted to predict walk kinematics. Epidural electrical stimulation in rats and in one young paraplegic patient has been used to partially restore motor control after spinal cord injury. However, these experimental trials are far from being applicable to all patients suffering from motor impairments; simpler rehabilitation systems are therefore desirable in the meantime. The goal of this review is to describe and summarize the progress made in the development of non-invasive brain-computer interfaces dedicated to motor rehabilitation systems. In the first part, the main principles of human locomotion control are presented. The paper then focuses on the mechanisms of supra-spinal centers active during gait, including results from electroencephalography, functional brain imaging technologies [near-infrared spectroscopy (NIRS), functional magnetic resonance imaging (fMRI), positron-emission tomography (PET), single-photon emission-computed tomography (SPECT)] and invasive studies. The first brain-computer interface (BCI) applications to gait rehabilitation are then presented, with a discussion of the different strategies developed in the field. The challenges that future systems must address are identified and discussed. Finally, we present some proposals to address these challenges, in order to contribute to the improvement of BCI for gait rehabilitation. PMID:24961699

  15. Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment.

    ERIC Educational Resources Information Center

    Weller, Herman G.; Hartson, H. Rex

    1992-01-01

    Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…

  16. The use of graphics in the design of the human-telerobot interface

    NASA Technical Reports Server (NTRS)

    Stuart, Mark A.; Smith, Randy L.

    1989-01-01

    The Man-Systems Telerobotics Laboratory (MSTL) of NASA's Johnson Space Center employs computer graphics tools in their design and evaluation of the Flight Telerobotic Servicer (FTS) human/telerobot interface on the Shuttle and on the Space Station. It has been determined by the MSTL that the use of computer graphics can promote more expedient and less costly design endeavors. Several specific examples of computer graphics applied to the FTS user interface by the MSTL are described.

  17. An Object-Oriented Graphical User Interface for a Reusable Rocket Engine Intelligent Control System

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Musgrave, Jeffrey L.; Guo, Ten-Huei; Paxson, Daniel E.; Wong, Edmond; Saus, Joseph R.; Merrill, Walter C.

    1994-01-01

    An intelligent control system for reusable rocket engines under development at NASA Lewis Research Center requires a graphical user interface to allow observation of the closed-loop system in operation. The simulation testbed consists of a real-time engine simulation computer, a controls computer, and several auxiliary computers for diagnostics and coordination. The system is set up so that the simulation computer could be replaced by the real engine and the change would be transparent to the control system. Because of the hard real-time requirement of the control computer, putting a graphical user interface on it was not an option. Thus, a separate computer used strictly for the graphical user interface was warranted. An object-oriented LISP-based graphical user interface has been developed on a Texas Instruments Explorer 2+ to indicate the condition of the engine to the observer through plots, animation, interactive graphics, and text.

  18. Design and development of data glove based on printed polymeric sensors and Zigbee networks for Human-Computer Interface.

    PubMed

    Tongrod, Nattapong; Lokavee, Shongpun; Watthanawisuth, Natthapol; Tuantranont, Adisorn; Kerdcharoen, Teerakiat

    2013-03-01

    Current trends in Human-Computer Interface (HCI) have brought on a wave of new consumer devices that can track the motion of our hands. These devices have enabled more natural interfaces with computer applications. Data gloves are commonly used as input devices, equipped with sensors that detect the movements of hands and a communication unit that interfaces those movements with a computer. Unfortunately, the high cost of sensor technology inevitably places a burden on most general users. In this research, we propose a low-cost data glove concept based on printed polymeric pressure and bending sensors fabricated with a consumer ink-jet printer. These sensors were realized using a conductive polymer (poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) [PEDOT:PSS]) thin film printed on glossy photo paper. Performance of these sensors can be enhanced by the addition of dimethyl sulfoxide (DMSO) to the aqueous dispersion of PEDOT:PSS. The concept of surface resistance was successfully adopted for the design and fabrication of the sensors. To demonstrate the printed sensors, we constructed a data glove using them and developed software for real-time hand tracking. Wireless networks based on low-cost Zigbee technology were used to transfer data from the glove to a computer. To our knowledge, this is the first report of a low-cost data glove based on paper pressure sensors. This low-cost implementation of both sensors and communication network should pave the way toward widespread use of data gloves for real-time hand-tracking applications.
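
    A resistive sensor of this kind is typically read out through a voltage divider. The sketch below shows one plausible readout path; the supply voltage, divider resistor, and ADC resolution are assumptions for illustration, not values from the paper.

        V_SUPPLY = 3.3      # volts (assumed)
        R_FIXED = 10_000.0  # ohms, fixed divider resistor (assumed)
        ADC_MAX = 1023      # 10-bit ADC (assumed)

        def sensor_resistance(adc_reading: int) -> float:
            """Estimate the printed sensor's resistance from an ADC sample."""
            v_out = V_SUPPLY * adc_reading / ADC_MAX
            return R_FIXED * (V_SUPPLY - v_out) / v_out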

  19. Nintendo Wii remote controllers for head posture measurement: accuracy, validity, and reliability of the infrared optical head tracker.

    PubMed

    Kim, Jongshin; Nam, Kyoung Won; Jang, Ik Gyu; Yang, Hee Kyung; Kim, Kwang Gi; Hwang, Jeong-Min

    2012-03-15

    To evaluate the accuracy, validity, and reliability of a newly developed infrared optical head tracker (IOHT) using Nintendo Wii remote controllers (WiiMote; Nintendo Co. Ltd., Kyoto, Japan) for measurement of the angle of head posture. The IOHT consists of two infrared (IR) receivers (WiiMote) that are fixed to a mechanical frame and connected to a monitoring computer via a Bluetooth communication channel and an IR beacon that consists of four IR light-emitting diodes (LEDs). With the use of the Cervical Range of Motion (CROM; Performance Attainment Associates, St. Paul, MN) as a reference, one- and three-dimensional (1- and 3-D) head postures of 20 normal adult subjects (20-37 years of age; 9 women and 11 men) were recorded with the IOHT. In comparison with the data from the CROM, the IOHT-derived results showed high consistency. The measurements of 1- and 3-D positions of the human head with the IOHT were very close to those of the CROM. The correlation coefficients of 1- and 3-D positions between the IOHT and the CROM were more than 0.99 and 0.96 (P < 0.05, Pearson's correlation test), respectively. Reliability tests of the IOHT for the normal adult subjects for 1- and 3-D positions of the human head had 95% limits of agreement angles of approximately ±4.5° and ±8.0°, respectively. The IOHT showed strong concordance with the CROM and relatively good test-retest reliability, thus proving its validity and reliability as a head-posture-measuring device. Considering its high performance, ease of use, and low cost, the IOHT has the potential to be widely used as a head-posture-measuring device in clinical practice.
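
    The 95% limits of agreement quoted above are conventionally computed Bland-Altman style from the paired angle measurements; a minimal sketch of that computation (with placeholder inputs) follows.

        import numpy as np

        def limits_of_agreement(ioht_angles, crom_angles):
            """Bland-Altman 95% limits of agreement for paired measurements."""
            diff = np.asarray(ioht_angles) - np.asarray(crom_angles)
            mean, sd = diff.mean(), diff.std(ddof=1)
            return mean - 1.96 * sd, mean + 1.96 * sd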

  20. EOG-sEMG Human Interface for Communication

    PubMed Central

    Tamura, Hiroki; Yan, Mingmin; Sakurai, Keiko; Tanno, Koichi

    2016-01-01

    The aim of this study is to present electrooculogram (EOG) and surface electromyogram (sEMG) signals that can be used as a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important for increasing the quality of life for patients suffering from amyotrophic lateral sclerosis, muscular dystrophy, or other illnesses. In this paper, we propose an EOG-sEMG human-computer interface system for communication using both cross-channel and parallel-line channels on the face with the same electrodes. This system can record EOG and sEMG signals simultaneously as a “dual modality” for pattern recognition. Although as many as four patterns could be recognized, in view of the condition of the patients we chose only two EOG classes (left and right motion) and two sEMG classes (left blink and right blink), which are easy to perform, for the simulation and monitoring task. From the simulation results, our system achieved four-pattern classification with an accuracy of 95.1%. PMID:27418924

  2. Development of the Computer Interface Literacy Measure.

    ERIC Educational Resources Information Center

    Turner, G. Marc; Sweany, Noelle Wall; Husman, Jenefer

    2000-01-01

    Discussion of computer literacy and the rapidly changing face of technology focuses on a study that redefined computer literacy to include competencies for using graphical user interfaces for operating systems, hypermedia applications, and the Internet. Describes the development and testing of the Computer Interface Literacy Measure with…

  3. CSI computer system/remote interface unit acceptance test results

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.

    1992-01-01

    The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) are discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC) programmed, space flight qualified computer and a flight data acquisition and filtering computer developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open loop excitation, closed loop control, safing, RIU digital filtering, and RIU stand-alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground-based systems in performing real-time control-structure experiments.

  4. On the Rhetorical Contract in Human-Computer Interaction.

    ERIC Educational Resources Information Center

    Wenger, Michael J.

    1991-01-01

    An exploration of the rhetorical contract--i.e., the expectations for appropriate interaction--as it develops in human-computer interaction revealed that direct manipulation interfaces were more likely to establish social expectations. Study results suggest that the social nature of human-computer interactions can be examined with reference to the…

  5. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.

  6. Brain computer interface for operating a robot

    NASA Astrophysics Data System (ADS)

    Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed

    2013-10-01

    A Brain-Computer Interface (BCI) is a hardware/software based system that translates the electroencephalogram (EEG) signals produced by brain activity into commands for controlling computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals of trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions instructed to it in real time. We used cognitive states such as push and pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
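
    The mapping from decoded cognitive states to robot motion can be as simple as a lookup table; the sketch below assumes hypothetical state names and a serial-style link to the robot, neither of which is specified in the abstract.

        # Hypothetical mapping of decoded cognitive states to motion commands.
        COMMANDS = {"push": b"FWD\n", "pull": b"REV\n", "neutral": b"STOP\n"}

        def drive(decoded_state: str, link) -> None:
            """Send the command for the decoded state over any write()-capable link."""
            link.write(COMMANDS.get(decoded_state, b"STOP\n"))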

  7. Visual perception affected by motivation and alertness controlled by a noninvasive brain-computer interface

    PubMed Central

    Zhuravlev, Maksim O.; Makarov, Vladimir V.; Nedayvozov, Vladimir; Grubov, Vadim V.; Pchelintceva, Svetlana V.; Hramov, Alexander E.

    2017-01-01

    The influence of motivation and alertness on brain activity associated with visual perception was studied experimentally using the Necker cube, whose ambiguity was controlled by the contrast of its ribs. The wavelet analysis of recorded multichannel electroencephalograms (EEG) allowed us to distinguish two different scenarios while the brain processed the ambiguous stimulus. The first scenario is characterized by a particular destruction of the alpha rhythm (8–12 Hz) with a simultaneous increase in beta-wave activity (20–30 Hz), whereas in the second scenario, the beta rhythm is not well pronounced while the alpha-wave energy remains unchanged. The experiments were carried out with a group of financially motivated subjects and another group of unpaid volunteers. It was found that the first scenario occurred mainly in the motivated group. This can be explained by the increased alertness of the motivated subjects. The prevalence of the first scenario was also observed in a group of subjects to whom images with higher ambiguity were presented. We believe that the revealed scenarios can occur not only during the perception of bistable images, but also in other perceptual tasks requiring decision making. The obtained results may have important applications for monitoring and controlling human alertness in situations which need substantial attention. On the basis of the obtained results we built a brain-computer interface to estimate and control the degree of alertness in real time. PMID:29267295

  8. Towards Better Human Robot Interaction: Understand Human Computer Interaction in Social Gaming Using a Video-Enhanced Diary Method

    NASA Astrophysics Data System (ADS)

    See, Swee Lan; Tan, Mitchell; Looi, Qin En

    This paper presents findings from descriptive research on social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and a gamer’s decision-making process can be elicited and examined during human-computer interaction. This new information should be considered, as it can help us build better human-computer interfaces and human-robot interfaces in the future.

  9. Brain-Computer Interfaces Using Sensorimotor Rhythms: Current State and Future Perspectives

    PubMed Central

    Yuan, Han; He, Bin

    2014-01-01

    Many studies over the past two decades have shown that people can use brain signals to convey their intent to a computer using brain-computer interfaces (BCIs). BCI systems extract specific features of brain activity and translate them into control signals that drive an output. Recently, a category of BCIs built on the rhythmic activity recorded over the sensorimotor cortex, i.e. the sensorimotor rhythm (SMR), has attracted considerable attention among the BCIs that use noninvasive neural recordings, e.g. electroencephalography (EEG), and has demonstrated the capability of multi-dimensional prosthesis control. This article reviews the current state and future perspectives of SMR-based BCI and its clinical applications, in particular focusing on the EEG SMR. The characteristic features of SMR from the human brain are described and their underlying neural sources are discussed. The functional components of SMR-based BCI, together with its current clinical applications, are reviewed. Lastly, limitations of SMR-BCIs and future outlooks are also discussed. PMID:24759276

  10. Non invasive Brain-Computer Interface system: towards its application as assistive technology

    PubMed Central

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio

    2010-01-01

    The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management and mobility, according to the user's residual motor abilities. Brain Computer Interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology for severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers the user a communication interface matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects had not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinically useful BCI. PMID:18394526

  11. [A wireless smart home system based on brain-computer interface of steady state visual evoked potential].

    PubMed

    Zhao, Li; Xing, Xiao; Guo, Xuhong; Liu, Zehua; He, Yang

    2014-10-01

    A brain-computer interface (BCI) is a system that achieves communication and control between humans and computers or other electronic equipment using electroencephalogram (EEG) signals. This paper describes the working theory of a wireless smart home system based on BCI technology. Steady-state visual evoked potentials (SSVEPs) were elicited using a single-chip microcomputer and LED-based visual stimulation of the eyes. Then, using a power-spectrum transformation built on the LabVIEW platform, the EEG signals recorded under stimulation at different frequencies were processed in real time and translated into different instructions. These instructions were received by wireless transceiver equipment to control household appliances and achieve intelligent control of the specified devices. The experimental results showed that the correct-response rate for the 10 subjects reached 100%, and the average control time for a single device was 4 seconds; the design thus fully achieves the original purpose of a smart home system.
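
    The frequency-detection step described above amounts to finding which stimulation frequency dominates the EEG power spectrum. A minimal Python sketch follows; the flicker frequencies and the appliance mapping are assumed examples, not the paper's values.

        import numpy as np

        FS = 256.0                                 # sampling rate, Hz (assumed)
        STIM_FREQS = {8.0: "lamp", 10.0: "fan",    # LED flicker frequency ->
                      12.0: "tv", 15.0: "door"}    # appliance command (assumed)

        def detect_command(eeg_window: np.ndarray) -> str:
            """Map the strongest SSVEP frequency in the window to a command."""
            freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
            power = np.abs(np.fft.rfft(eeg_window)) ** 2
            # Power at the bin nearest each candidate stimulation frequency
            scores = {f: power[np.argmin(np.abs(freqs - f))] for f in STIM_FREQS}
            return STIM_FREQS[max(scores, key=scores.get)]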

  12. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.

  13. A Collaborative Brain-Computer Interface for Improving Human Performance

    PubMed Central

    Wang, Yijun; Jung, Tzyy-Ping

    2011-01-01

    Electroencephalogram (EEG) based brain-computer interfaces (BCI) have been studied since the 1970s. Currently, the main focus of BCI research lies on the clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, the BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practices due to technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCI applied to the EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) Event-related potentials (ERP) averaging, (2) Feature concatenating, and (3) Voting. In a demonstration system using the Voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the numbers of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision of reaching direction could be made around 100–250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse brain activities of a group of people to improve the overall performance of natural human behavior. PMID:21655253
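
    Of the three fusion methods, Voting is the simplest to illustrate: each subject's single-trial classifier casts a vote and the group decision is the majority label. A minimal sketch with placeholder labels:

        from collections import Counter

        def group_decision(per_subject_labels):
            """Majority vote over per-subject classifier outputs."""
            return Counter(per_subject_labels).most_common(1)[0][0]

        print(group_decision(["left", "right", "left", "left", "right"]))  # left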

  14. Effects of Home and School Computer Use on School Readiness and Cognitive Development among Head Start Children: A Randomized Controlled Pilot Trial

    ERIC Educational Resources Information Center

    Li, Xiaoming; Atkins, Melissa S.; Stanton, Bonita

    2006-01-01

    Data from 122 Head Start children were analyzed to examine the impact of computer use on school readiness and psychomotor skills. Children in the experimental group were given the opportunity to work on a computer for 15-20 minutes per day with their choice of developmentally appropriate educational software, while the control group received a…

  15. Cursor control by Kalman filter with a non-invasive body–machine interface

    PubMed Central

    Seáñez-González, Ismael; Mussa-Ivaldi, Ferdinando A

    2015-01-01

    Objective We describe a novel human–machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user’s upper body. Approach A calibration paradigm in which human subjects follow a cursor with their body, as if they were controlling it with their shoulders, generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor. Main results Our results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body–machine interface systems as an alternative or complement to brain–machine interfaces for accomplishing cursor control in 2D space. Significance The present study may serve as a platform for people with high tetraplegia to control assistive devices such as powered wheelchairs using a joystick. PMID:25242561
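
    A minimal sketch of a Kalman-filter cursor decoder of the kind described above, assuming a constant-velocity cursor state (x, y, vx, vy) observed through a calibrated map from shoulder signals; all matrices below are placeholders standing in for quantities the calibration phase would provide.

        import numpy as np

        dt = 0.05
        A = np.eye(4); A[0, 2] = A[1, 3] = dt   # constant-velocity dynamics
        W = 0.01 * np.eye(4)                    # process noise covariance (assumed)
        H = 0.5 * np.eye(4)                     # shoulder-signal -> state map (placeholder)
        Q = 0.1 * np.eye(4)                     # observation noise covariance (assumed)

        def kalman_step(x, P, z):
            """One predict/update cycle given IMU-derived observation z."""
            x_pred = A @ x                      # predict state
            P_pred = A @ P @ A.T + W
            K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + Q)  # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)                   # update
            P_new = (np.eye(4) - K @ H) @ P_pred
            return x_new, P_new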

  16. Issues in human/computer control of dexterous remote hands

    NASA Technical Reports Server (NTRS)

    Salisbury, K.

    1987-01-01

    Much research on dexterous robot hands has been aimed at the design and control problems associated with their autonomous operation, while relatively little research has addressed the problem of direct human control. It is likely that these two modes can be combined in a complementary manner yielding more capability than either alone could provide. While many of the issues in mixed computer/human control of dexterous hands parallel those found in supervisory control of traditional remote manipulators, the unique geometry and capabilities of dexterous hands pose many new problems. Among these are the control of redundant degrees of freedom, grasp stabilization and specification of non-anthropomorphic behavior. An overview is given of progress made at the MIT AI Laboratory in control of the Salisbury 3 finger hand, including experiments in grasp planning and manipulation via controlled slip. It is also suggested how we might introduce human control into the process at a variety of functional levels.

  17. Biosensor Technologies for Augmented Brain-Computer Interfaces in the Next Decades

    DTIC Science & Technology

    2012-05-13

    [Report documentation fragment; only the following details are recoverable.] Keywords: augmented brain-computer interface (ABCI); biosensor; cognitive-state monitoring; electroencephalogram (EEG); human brain imaging. Manuscript received November 28, 2011; accepted December 20. The abstract fragment mentions functional magnetic resonance imaging (fMRI), positron emission tomography (PET), electroencephalograms (EEGs), and optical brain imaging techniques.

  18. Haptic interfaces: Hardware, software and human performance

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandayam A.

    1995-01-01

    Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.

  19. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    PubMed

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. In addition to ease of use, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input to the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server enables successful internet-based wireless control of electrical home appliances through BCIs.

  20. High density tape/head interface study

    NASA Technical Reports Server (NTRS)

    Csengery, L. C.

    1983-01-01

    This study of high-energy tapes (coercivity Hc approximately equal to 650 oersteds) and high-track-density heads (84 tracks per inch) had as its goal the definition of optimum combinations of head and tape, including the control required of their interfacial dynamics, that would enable the manufacture of high-rate (150 Mbps) digital tape recorders for unattended space flight.

  1. Ethics in published brain-computer interface research

    NASA Astrophysics Data System (ADS)

    Specker Sullivan, L.; Illes, J.

    2018-02-01

    Objective. Sophisticated signal processing has opened the doors to more research with human subjects than ever before. The increase in the use of human subjects in research comes with a need for increased human subjects protections. Approach. We quantified the presence or absence of ethics language in published reports of brain-computer interface (BCI) studies that involved human subjects and qualitatively characterized ethics statements. Main results. Reports of BCI studies with human subjects that are published in neural engineering and engineering journals are anchored in the rationale of technological improvement. Ethics language is markedly absent, omitted from 31% of studies published in neural engineering journals and 59% of studies in biomedical engineering journals. Significance. As the integration of technological tools with the capacities of the mind deepens, explicit attention to ethical issues will ensure that broad human benefit is embraced and not eclipsed by technological exclusiveness.

  2. Soft drink effects on sensorimotor rhythm brain computer interface performance and resting-state spectral power.

    PubMed

    Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He

    2016-08-01

    Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting-state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting-state EEG. During each session, the subjects drank an unlabeled soft drink containing either sugar, caffeine, or neither ingredient. A comparison of resting-state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite attenuation of the frequency range used for the control signal, average BCI performance after caffeine was the same as control. Our work provides a useful characterization of the effect of caffeine, the world's most popular stimulant, on brain signal frequencies and on BCI performance.
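
    Resting-state band power of the kind compared above is commonly estimated by integrating a Welch power spectral density over the band; a sketch with conventional (assumed) band edges follows.

        import numpy as np
        from scipy.signal import welch

        def band_power(eeg, fs, lo, hi):
            """Approximate power in the [lo, hi] Hz band of a 1-D EEG signal."""
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
            mask = (freqs >= lo) & (freqs <= hi)
            return psd[mask].sum() * (freqs[1] - freqs[0])

        # e.g. alpha: band_power(rs_eeg, 256, 8, 12); beta: band_power(rs_eeg, 256, 13, 30)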

  3. Design of a Computer-Controlled, Random-Access Slide Projector Interface. Final Report (April 1974 - November 1974).

    ERIC Educational Resources Information Center

    Kirby, Paul J.; And Others

    The design, development, test, and evaluation of an electronic hardware device interfacing a commercially available slide projector with a plasma panel computer terminal is reported. The interface device allows an instructional computer program to select slides for viewing based upon the lesson student situation parameters of the instructional…

  4. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, so diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficient to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.

  5. Human factors optimization of virtual environment attributes for a space telerobotic control station

    NASA Astrophysics Data System (ADS)

    Lane, Jason Corde

    2000-10-01

    Remote control of underwater vehicles and other robotic systems has, up until now, proved to be a challenging task for the human operator. With technology advancements in computers and displays, computer interfaces can be used to alleviate the workload on the operator. This research introduces the concept of a commanded display, which is a graphical simulation that shows the commands sent to the actual system in real-time. The primary goal of this research was to show a commanded display as an alternative to the traditional predictive display for reducing the effects of time delay. Several experiments were used to investigate how subjects compensated for time delay under a variety of conditions while controlling a 7-degree of freedom robotic manipulator. Results indicate that time delay increased completion time linearly; this linear relationship occurred even at different manipulator speeds, varying levels of error, and when using a commanded display. The commanded display alleviated the majority of time delay effects, up to 91% reduction. The commanded display also facilitated more accurate control, reducing the number of inadvertent impacts to the task worksite, even when compared to no time delay. Even with a moderate error between the commanded and actual displays, the commanded display was still a useful tool for mitigating time delay. The way subjects controlled the manipulator with the input device was tracked and their control strategies were extracted. A correlation between the subjects' use of the input device and their task completion time was determined. The importance of stereo vision and head tracking was examined and shown to improve a subject's depth perception within a virtual environment. Reports of simulator sickness induced by display equipment, including a head mounted display and LCD shutter glasses, were compared. The results of the above testing were used to develop an effective virtual environment control station to control a multi

  6. Assessment of mechanical properties of human head tissues for trauma modelling.

    PubMed

    Lozano-Mínguez, Estívaliz; Palomar, Marta; Infante-García, Diego; Rupérez, María José; Giner, Eugenio

    2018-05-01

    Many discrepancies are found in the literature regarding the damage and constitutive models for head tissues, as well as the values of the constants involved in the constitutive equations. Their proper definition is required for consistent numerical model performance when predicting human head behaviour, and hence skull fracture and brain damage. The objective of this research is to perform a critical review of constitutive models and damage indicators describing human head tissue response under impact loading. A 3D finite element human head model has been generated using computed tomography images and validated through comparison with experimental data in the literature. The threshold values of the skull and the scalp that lead to fracture have been analysed. We conclude that (1) compact bone properties are critical in skull fracture, (2) the elastic constants of the cerebrospinal fluid affect the intracranial pressure distribution, and (3) treating brain tissue as a nearly incompressible solid with a high (but not complete) water content yields pressure responses consistent with the experimental data. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Designing Guiding Systems for Brain-Computer Interfaces

    PubMed Central

    Kosmyna, Nataliya; Lécuyer, Anatole

    2017-01-01

    The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore instructions and feedback for BCIs in order to provide a systematic taxonomy describing BCI guiding systems. The purpose of our work is to give researchers and designers in Human–Computer Interaction (HCI) the clues necessary to make the fusion between BCIs and HCI more fruitful, and also to better understand the possibilities BCIs can provide to them. PMID:28824400

  8. Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia

    1996-01-01

    The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.

  9. A visual interface to computer programs for linkage analysis.

    PubMed

    Chapman, C J

    1990-06-01

    This paper describes a visual approach to the input of information about human families into computer data bases, making use of the GEM graphic interface on the Atari ST. Similar approaches could be used on the Apple Macintosh or on the IBM PC AT (to which it has been transferred). For occasional users of pedigree analysis programs, this approach has considerable advantages in ease of use and accessibility. An example of such use might be the analysis of risk in families with Huntington disease using linked RFLPs. However, graphic interfaces do make much greater demands on the programmers of these systems.

  10. Interfacing computers and the internet with your allergy practice.

    PubMed

    Bernstein, Jonathan A

    2004-10-01

    Computers and the internet have begun to play a prominent role in the medical profession and, in particular, the allergy specialty. Computer technology is being used more frequently for patient and physician education; asthma management in children and adults, including environmental control; generating patient databases for research and clinical practice; and marketing and e-commerce. This article reviews how computers and the internet have begun to interface with the allergy subspecialty practice in these various areas.

  11. Brain computer interface to enhance episodic memory in human participants

    PubMed Central

    Burke, John F.; Merkow, Maxwell B.; Jacobs, Joshua; Kahana, Michael J.

    2015-01-01

    Recent research has revealed that neural oscillations in the theta (4–8 Hz) and alpha (9–14 Hz) bands are predictive of future success in memory encoding. Because these signals occur before the presentation of an upcoming stimulus, they are considered stimulus-independent in that they correlate with enhanced memory encoding independent of the item being encoded. Thus, such stimulus-independent activity has important implications for the neural mechanisms underlying episodic memory as well as the development of cognitive neural prosthetics. Here, we developed a brain computer interface (BCI) to test the ability of such pre-stimulus activity to modulate subsequent memory encoding. We recorded intracranial electroencephalography (iEEG) in neurosurgical patients as they performed a free recall memory task, and detected iEEG theta and alpha oscillations that correlated with optimal memory encoding. We then used these detected oscillatory changes to trigger the presentation of items in the free recall task. We found that item presentation contingent upon the presence of pre-stimulus theta and alpha oscillations modulated memory performance in more sessions than expected by chance. Our results suggest that an electrophysiological signal may be causally linked to a specific behavioral condition, and contingent stimulus presentation has the potential to modulate human memory encoding. PMID:25653605
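
    The closed-loop rule the study describes, presenting the next item only when pre-stimulus theta/alpha activity looks favourable, reduces in its simplest form to a threshold test on band power; the thresholds below are placeholders, not the study's values.

        def should_present(theta_power, alpha_power,
                           theta_thresh=1.5, alpha_thresh=1.2):
            """Trigger item presentation when both bands exceed (assumed) thresholds."""
            return theta_power > theta_thresh and alpha_power > alpha_thresh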

  12. Interface design and human factors considerations for model-based tight glycemic control in critical care.

    PubMed

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design intended to maximize compliance, minimize real and perceived clinical effort, and minimize error, based on simple human factors principles and end-user input. The graphical user interface (GUI) design is presented by construction from a series of simple, short design criteria grounded in fundamental human factors engineering, and it incorporates feedback from focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction. It is coupled to a protocol that allows nursing staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users, who are the predominant users, while additional detailed and longitudinal data, of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests, based on the end user's immediate focus and goals, shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error, and are readily generalizable. © 2012 Diabetes Technology Society.

  13. Fusion interfaces for tactical environments: An application of virtual reality technology

    NASA Technical Reports Server (NTRS)

    Haas, Michael W.

    1994-01-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory, virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.

  14. Neurobionics and the brain-computer interface: current applications and future horizons.

    PubMed

    Rosenfeld, Jeffrey V; Wong, Yan Tat

    2017-05-01

    The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and memory enhancement. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.

  15. High-resolution EEG techniques for brain-computer interface applications.

    PubMed

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Astolfi, Laura; De Vico Fallani, Fabrizio; Tocci, Andrea; Bianchi, Luigi; Marciani, Maria Grazia; Gao, Shangkai; Millan, Jose; Babiloni, Fabio

    2008-01-15

    High-resolution electroencephalographic (HREEG) techniques allow estimation of cortical activity based on non-invasive scalp potential measurements, using appropriate models of volume conduction and of neuroelectrical sources. In this study we propose an application of this body of technologies, originally developed to obtain functional images of the brain's electrical activity, in the context of brain-computer interfaces (BCI). Our working hypothesis predicted that, since HREEG pre-processing removes spatial correlation introduced by current conduction in the head structures, by providing the BCI with waveforms that are mostly due to the unmixed activity of a small cortical region, a more reliable classification would be obtained, at least when the activity to detect has a limited generator, which is the case in motor related tasks. HREEG techniques employed in this study rely on (i) individual head models derived from anatomical magnetic resonance images, (ii) distributed source model, composed of a layer of current dipoles, geometrically constrained to the cortical mantle, (iii) depth-weighted minimum L(2)-norm constraint and Tikhonov regularization for linear inverse problem solution and (iv) estimation of electrical activity in cortical regions of interest corresponding to relevant Brodmann areas. Six subjects were trained to learn self modulation of sensorimotor EEG rhythms, related to the imagination of limb movements. Off-line EEG data was used to estimate waveforms of cortical activity (cortical current density, CCD) on selected regions of interest. CCD waveforms were fed into the BCI computational pipeline as an alternative to raw EEG signals; spectral features are evaluated through statistical tests (r(2) analysis), to quantify their reliability for BCI control. These results are compared, within subjects, to analogous results obtained without HREEG techniques. The processing procedure was designed in such a way that computations could be split into a

  16. Human Factors Guidance for Control Room and Digital Human-System Interface Design and Modification, Guidelines for Planning, Specification, Design, Licensing, Implementation, Training, Operation and Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Fink, D. Hill, J. O'Hara

    2004-11-30

    Nuclear plant operators face a significant challenge designing and modifying control rooms. This report provides guidance on planning, designing, implementing and operating modernized control rooms and digital human-system interfaces.

  17. A comparative study: use of a Brain-computer Interface (BCI) device by people with cerebral palsy in interaction with computers.

    PubMed

    Heidrich, Regina O; Jensen, Emely; Rebelo, Francisco; Oliveira, Tiago

    2015-01-01

    This article presents a comparative study among people with cerebral palsy and healthy controls, of various ages, using a Brain-computer Interface (BCI) device. The research is qualitative in its approach. Researchers worked with Observational Case Studies. People with cerebral palsy and healthy controls were evaluated in Portugal and in Brazil. The study aimed to develop a study for product evaluation in order to perceive whether people with cerebral palsy could interact with the computer and compare whether their performance is similar to that of healthy controls when using the Brain-computer Interface. Ultimately, it was found that there are no significant differences between people with cerebral palsy in the two countries, as well as between populations without cerebral palsy (healthy controls).

  18. Virtual interface environment

    NASA Technical Reports Server (NTRS)

    Fisher, Scott S.

    1986-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.

  19. A HUMAN FACTORS ENGINEERING PROCESS TO SUPPORT HUMAN-SYSTEM INTERFACE DESIGN IN CONTROL ROOM MODERNIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovesdi, C.; Joe, J.; Boring, R.

    The primary objective of the United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to sustain operation of the existing commercial nuclear power plants (NPPs) through a multi-pathway approach in conducting research and development (R&D). The Advanced Instrumentation, Information, and Control (II&C) System Technologies pathway conducts targeted R&D to address aging and reliability concerns with legacy instrumentation and control (I&C) and other information systems in existing U.S. NPPs. Control room modernization is an important part of following this pathway, and human factors experts at Idaho National Laboratory (INL) have been involved in conducting R&D to support migration of new digital main control room (MCR) technologies from legacy analog and legacy digital I&C. This paper describes a human factors engineering (HFE) process that supports human-system interface (HSI) design in MCR modernization activities, particularly with migration from old digital to new digital I&C. The process described in this work is an expansion of the LWRS Report INL/EXT-16-38576, and is a requirements-driven approach that aligns with NUREG-0711 requirements. The work described builds upon the existing literature by adding more detail around key tasks and decisions to make when transitioning from HSI Design into Verification and Validation (V&V). The overall objective of this process is to inform HSI design and elicit specific, measurable, and achievable human factors criteria for new digital technologies. Upon following this process, utilities should have greater confidence in transitioning from HSI design into V&V.

  20. Eye Tracking and Head Movement Detection: A State-of-Art Survey

    PubMed Central

    2013-01-01

    Eye-gaze detection and tracking have been an active research field in the past years, as they add convenience to a variety of applications. Eye-gaze detection is considered a significant, nontraditional method of human-computer interaction. Head movement detection has also received researchers' attention and interest, as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-the-art survey of eye tracking and head movement detection methods proposed in the literature. Examples of different fields of application for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies, are also investigated. PMID:27170851

  1. A Direct Brain-to-Brain Interface in Humans

    PubMed Central

    Rao, Rajesh P. N.; Stocco, Andrea; Bryan, Matthew; Sarma, Devapratim; Youngquist, Tiffany M.; Wu, Joseph; Prat, Chantel S.

    2014-01-01

    We describe the first direct brain-to-brain interface in humans and present results from experiments involving six different subjects. Our non-invasive interface, demonstrated originally in August 2013, combines electroencephalography (EEG) for recording brain signals with transcranial magnetic stimulation (TMS) for delivering information to the brain. We illustrate our method using a visuomotor task in which two humans must cooperate through direct brain-to-brain communication to achieve a desired goal in a computer game. The brain-to-brain interface detects motor imagery in EEG signals recorded from one subject (the “sender”) and transmits this information over the internet to the motor cortex region of a second subject (the “receiver”). This allows the sender to cause a desired motor response in the receiver (a press on a touchpad) via TMS. We quantify the performance of the brain-to-brain interface in terms of the amount of information transmitted as well as the accuracies attained in (1) decoding the sender’s signals, (2) generating a motor response from the receiver upon stimulation, and (3) achieving the overall goal in the cooperative visuomotor task. Our results provide evidence for a rudimentary form of direct information transmission from one human brain to another using non-invasive means. PMID:25372285

  2. A computer simulation approach to measurement of human control strategy

    NASA Technical Reports Server (NTRS)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for the control stick that moves a cursor along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.
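
    As a rough illustration of the kind of control law such an emulator might implement, the following Python sketch generates stick commands from the track error anticipated a few samples ahead. The proportional law, preview horizon, and cursor dynamics are hypothetical simplifications for illustration, not HOPE's actual model.

      import numpy as np

      def preview_stick_command(track, t, cursor_pos, preview=5, gain=0.4):
          """Toy preview-tracking law: deflect the stick in proportion to
          the error anticipated `preview` samples ahead (illustrative only)."""
          target = track[min(t + preview, len(track) - 1)]
          return gain * (target - cursor_pos)

      # Toy run: sinusoidal track; the cursor integrates stick deflection.
      track = np.sin(np.linspace(0, 4 * np.pi, 200))
      cursor, trace = 0.0, []
      for t in range(len(track)):
          cursor += preview_stick_command(track, t, cursor)
          trace.append(cursor)
      print("RMS tracking error:", np.sqrt(np.mean((np.array(trace) - track) ** 2)))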

  3. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  4. Eye-gaze determination of user intent at the computer interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-12-31

    Determination of user intent at the computer interface through eye-gaze monitoring can significantly aid applications for the disabled, as well as telerobotics and process control interfaces. Whereas current eye-gaze control applications are limited to object selection and x/y gazepoint tracking, a methodology was developed here to discriminate a more abstract interface operation: zooming in or out. This methodology first collects samples of eye-gaze location while looking at controlled stimuli, at 30 Hz, just prior to a user's decision to zoom. The sample is broken into data frames, or temporal snapshots. Within a data frame, all spatial samples are connected into a minimum spanning tree, then clustered, according to user-defined parameters. Each cluster is mapped to one in the prior data frame, and statistics are computed from each cluster. These characteristics include cluster size, position, and pupil size. A multiple discriminant analysis uses these statistics both within and between data frames to formulate optimal rules for assigning the observations into zoom-in, zoom-out, or no-zoom conditions. The statistical procedure effectively generates heuristics for future assignments based upon these variables. Future work will enhance the accuracy and precision of the modeling technique and will empirically test users in controlled experiments.
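
    The clustering step lends itself to a compact sketch. The Python fragment below builds a minimum spanning tree over one data frame of synthetic gaze samples, cuts edges longer than a user-defined threshold, and reports per-cluster statistics; the threshold and sample data are illustrative stand-ins, not the paper's parameters.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

      def mst_clusters(points, max_edge=30.0):
          """Cluster 2-D gaze samples: build a minimum spanning tree over the
          pairwise distances, then cut edges longer than `max_edge` (pixels;
          a user-defined parameter, value here is illustrative)."""
          dist = squareform(pdist(points))
          mst = minimum_spanning_tree(dist).toarray()
          mst[mst > max_edge] = 0.0                       # cut long edges
          n_clusters, labels = connected_components(mst != 0, directed=False)
          return labels

      # One 30 Hz "data frame" of gaze samples around two fixations (pixels).
      rng = np.random.default_rng(1)
      frame = np.vstack([rng.normal((200, 200), 8, (15, 2)),
                         rng.normal((400, 260), 8, (15, 2))])
      labels = mst_clusters(frame)
      for k in np.unique(labels):                         # per-cluster statistics
          c = frame[labels == k]
          print(f"cluster {k}: size={len(c)}, centroid={c.mean(axis=0).round(1)}")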

  5. Wearable computer for mobile augmented-reality-based controlling of an intelligent robot

    NASA Astrophysics Data System (ADS)

    Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino

    2000-10-01

    An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e., teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.

  6. Neuroengineering tools/applications for bidirectional interfaces, brain-computer interfaces, and neuroprosthetic implants - a review of recent progress.

    PubMed

    Rothschild, Ryan Mark

    2010-01-01

    The main focus of this review is to provide a holistic amalgamated overview of the most recent human in vivo techniques for implementing brain-computer interfaces (BCIs), bidirectional interfaces, and neuroprosthetics. Neuroengineering is providing new methods for tackling current difficulties; however, neuroprosthetics have been studied for decades. Recent progress is permitting the design of better systems with higher accuracy, repeatability, and robustness. Bidirectional interfaces integrate recording and the relaying of information from and to the brain for the development of BCIs. The concepts of non-invasive and invasive recording of brain activity are introduced. This includes classical and innovative techniques like electroencephalography and near-infrared spectroscopy. Then the problem of gliosis and solutions for (semi-)permanent implant biocompatibility, such as innovative implant coatings, materials, and shapes, are discussed. Implant power and the transmission of their data through implanted pulse generators and wireless telemetry are taken into account. How sensation can be relayed back to the brain to increase integration of the neuroengineered systems with the body, by methods such as micro-stimulation and transcranial magnetic stimulation, is then addressed. The neuroprosthetic section discusses some of the various types and how they operate. Visual prosthetics are discussed, and the three types, dependent on implant location, are examined. Auditory prosthetics, being cochlear or cortical, are then addressed. Replacement hand and limb prosthetics are then considered. These are followed by sections concentrating on the control of wheelchairs, computers, and robotics directly from brain activity as recorded by non-invasive and invasive techniques.

  7. Agent-Based Intelligent Interface for Wheelchair Movement Control

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.

    2018-01-01

    People who suffer from any kind of motor difficulty face serious obstacles to moving autonomously in their daily lives. In response, a growing number of research projects propose different powered wheelchair control systems. Despite the research community's interest in the area, there is no platform that allows easy integration of various control methods that make use of heterogeneous sensors and computationally demanding algorithms. In this work, an architecture based on virtual organizations of agents is proposed that makes use of a flexible and scalable communication protocol allowing the deployment of embedded agents on computationally limited devices. In order to validate the proper functioning of the proposed system, it was integrated into a conventional wheelchair, and a set of alternative control interfaces was developed and deployed, including a portable electroencephalography system, a voice interface, and a specifically designed smartphone application. A set of tests was conducted to assess both the platform's adequacy and the accuracy and ease of use of the proposed control systems, yielding positive results that can be useful in further wheelchair interface design and implementation. PMID:29751603

  8. A self-paced motor imagery based brain-computer interface for robotic wheelchair control.

    PubMed

    Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng

    2011-10-01

    This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair, except for automatic obstacle avoidance based on a laser range finder when necessary. In order for users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed the users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that controlling the simulated robot. Our emphasis is on allowing more potential users to use the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.

  9. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human-machine interface is retained, but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator by modifying the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.

  10. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    NASA Astrophysics Data System (ADS)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneous recording of 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application implemented under the Processing integrated development environment. The application acquires the data from the headset and issues suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals for a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.
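
    The hand-off from the host application to the Arduino Uno can be sketched as a simple serial protocol. The snippet below is written in Python (with pyserial) rather than the Processing environment the paper used, and the one-byte command codes, port name, and baud rate are assumptions for illustration, not the paper's protocol.

      import time
      import serial  # pyserial; pip install pyserial

      # Hypothetical one-byte protocol: 'c' = close claw, 'o' = open claw.
      PORT, BAUD = "/dev/ttyACM0", 9600   # assumed port and baud rate

      def send_claw_command(conn, mental_command):
          """Map a decoded headset command to a one-byte serial message
          that the Arduino sketch would translate into a servo position."""
          code = {"grip": b"c", "release": b"o"}[mental_command]
          conn.write(code)

      if __name__ == "__main__":
          with serial.Serial(PORT, BAUD, timeout=1) as conn:
              time.sleep(2)                     # wait for the Uno to reset
              send_claw_command(conn, "grip")   # close the claw
              time.sleep(1)
              send_claw_command(conn, "release")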

  11. Virtual head rotation reveals a process of route reconstruction from human vestibular signals

    PubMed Central

    Day, Brian L; Fitzpatrick, Richard C

    2005-01-01

    The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled. PMID:16002439
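
    The dot-product computation described in the abstract can be written out directly. In the sketch below, a virtual rotation axis fixed in the skull is pitched in space and projected onto the gravitational unit vector; the choice of axes and the scaling are illustrative, not the paper's fitted model.

      import numpy as np

      def perceived_yaw(head_pitch_deg, canal_signal=1.0):
          """Perceived whole-body yaw as the dot product of the craniocentric
          rotation vector with the gravitational unit vector (axes assumed)."""
          pitch = np.radians(head_pitch_deg)
          # Skull-fixed rotation axis along the naso-occipital line, expressed
          # in space coordinates after pitching the head by `pitch`.
          axis_in_space = np.array([np.cos(pitch), 0.0, -np.sin(pitch)])
          gravity_unit = np.array([0.0, 0.0, 1.0])   # upward unit vector
          return canal_signal * axis_in_space @ gravity_unit

      # Head pitched backwards, intermediate, and forwards: the sign of the
      # perceived yaw flips and passes through zero, as in the abstract.
      for pitch in (-60, 0, 60):
          print(pitch, round(perceived_yaw(pitch), 2))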

  12. Development and functional demonstration of a wireless intraoral inductive tongue computer interface for severely disabled persons.

    PubMed

    N S Andreasen Struijk, Lotte; Lontis, Eugen R; Gaihede, Michael; Caltenco, Hector A; Lund, Morten Enemark; Schioeler, Henrik; Bentsen, Bo

    2017-08-01

    Individuals with tetraplegia depend on alternative interfaces in order to control computers and other electronic equipment. Current interfaces are often limited in the number of available control commands and may compromise the social identity of an individual due to their undesirable appearance. The purpose of this study was to implement an alternative computer interface that was fully embedded into the oral cavity and provided multiple control commands. The development of a wireless, intraoral, inductive tongue computer interface is described. The interface encompassed a 10-key keypad area and a mouse pad area, and was embedded wirelessly into the oral cavity of the user. Its functionality was demonstrated in two tetraplegic individuals and two able-bodied individuals. Results: The system was invisible during use and allowed the user to type on a computer using either the keypad area or the mouse pad area. The fastest typing rate was 1.8 s for repetitively typing a correct character with the keypad area and 1.4 s with the mouse pad area. The results suggest that this inductive tongue computer interface provides an esthetically acceptable and functionally efficient environmental control for a severely disabled user. Implications for rehabilitation: new design, implementation, and detection methods for intraoral assistive devices; demonstration of wireless powering and encapsulation techniques suitable for intraoral embedding of assistive devices; and demonstration of the functionality of a rechargeable, fully embedded, intraoral tongue-controlled computer input device.

  13. Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control.

    PubMed

    Miller, Christopher A; Parasuraman, Raja

    2007-02-01

    To develop a method enabling human-like, flexible supervisory control via delegation to automation. Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. We review problems with static and adaptive (as opposed to "adaptable") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a "level of automation" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described. On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's "playbook." Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.

  14. Interface Design and Human Factors Considerations for Model-Based Tight Glycemic Control in Critical Care

    PubMed Central

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Introduction: Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design intended to maximize compliance and minimize real and perceived clinical effort and error, based on simple human factors principles and end-user input. Method: The graphical user interface (GUI) design is presented by construction, following a series of simple, short design criteria grounded in fundamental human factors engineering, and draws on user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction. It is coupled to a protocol that allows nursing staff to select measurement intervals and thus self-manage workload. Results: The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users, who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions: The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error, and are readily generalizable. PMID:22401330

  15. Ownership and Agency of an Independent Supernumerary Hand Induced by an Imitation Brain-Computer Interface.

    PubMed

    Bashford, Luke; Mehring, Carsten

    2016-01-01

    To study body ownership and control, illusions that elicit these feelings in non-body objects are widely used. Classically introduced with the Rubber Hand Illusion, these illusions have been replicated more recently in virtual reality and by using brain-computer interfaces. Traditionally these illusions investigate the replacement of a body part by an artificial counterpart; however, as brain-computer interface research develops, it offers the possibility of exploring the case where non-body objects are controlled in addition to movements of our own limbs. We therefore propose a new illusion designed to test the feeling of ownership and control of an independent supernumerary hand. Subjects are under the impression that they control a virtual reality hand via a brain-computer interface, but in reality there is no causal connection between brain activity and virtual hand movement; correct movements are simply displayed with 80% probability. These imitation brain-computer interface trials are interspersed with movements of both of the subjects' real hands, which are in view throughout the experiment. We show that subjects develop strong feelings of ownership and control over the third hand, despite only receiving visual feedback with no causal link to the actual brain signals. Our illusion is crucially different from previously reported studies, as we demonstrate independent ownership and control of the third hand without loss of ownership in the real hands.

  16. Concept of software interface for BCI systems

    NASA Astrophysics Data System (ADS)

    Svejda, Jaromir; Zak, Roman; Jasek, Roman

    2016-06-01

    Brain-computer interface (BCI) technology is intended to control an external system by brain activity. One of the main parts of such a system is the software interface, which is responsible for clear communication between the brain and either the computer or additional devices connected to the computer. This paper is organized as follows. Firstly, current knowledge about the human brain is briefly summarized to point out its complexity. Secondly, a concept of a BCI system is described, which is then used to build the architecture of the proposed software interface. Finally, disadvantages of the sensing technology discovered during the sensing part of our research are mentioned.

  17. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  18. Temperature and melt solid interface control during crystal growth

    NASA Technical Reports Server (NTRS)

    Batur, Celal

    1990-01-01

    Findings on the adaptive control of a transparent Bridgman crystal growth furnace are summarized. The task of the process controller is to establish a user-specified axial temperature profile by controlling the temperatures in eight heating zones. The furnace controller is built around a computer. Adaptive PID (Proportional Integral Derivative) and pole placement control algorithms are applied. The need for an adaptive controller stems from the fact that the zone dynamics change over time. The controller was tested extensively on lead bromide crystal growth. Several different temperature profiles and ampoule translation rates were tried. The feasibility of solid-liquid interface quantification by image processing was determined. The interface is observed by a color video camera, and the image data file is processed to determine whether the interface is flat, convex, or concave.
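
    A discrete PID loop with a crude form of gain adaptation conveys the flavor of the approach. The plant model, drift profile, gains, and clamped-integral anti-windup in this Python sketch are toy assumptions, not the furnace's identified dynamics or the paper's adaptation law.

      def pid_step(err, state, kp, ki, kd, dt=1.0):
          """One discrete PID update; `state` carries (integral, previous error).
          The integral is clamped as a simple anti-windup measure."""
          integral, prev_err = state
          integral = max(-2000.0, min(2000.0, integral + err * dt))
          deriv = (err - prev_err) / dt
          return kp * err + ki * integral + kd * deriv, (integral, err)

      # Toy heating zone: a first-order thermal lag whose gain drifts slowly,
      # standing in for the time-varying zone dynamics that motivate adaptation.
      setpoint, temp, state = 500.0, 20.0, (0.0, 0.0)
      kp, ki, kd = 2.0, 0.05, 0.5
      for t in range(300):
          plant_gain = 0.05 * (1.0 + 0.5 * t / 300)       # slow parameter drift
          u, state = pid_step(setpoint - temp, state, kp, ki, kd)
          temp += plant_gain * (u - 0.1 * (temp - 20.0))  # heating minus losses
          kp = 2.0 / (1.0 + 0.5 * t / 300)                # crude gain scheduling
      print("final zone temperature:", round(temp, 1))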

  19. Comparison of electromyography and force as interfaces for prosthetic control.

    PubMed

    Corbett, Elaine A; Perreault, Eric J; Kuiken, Todd A

    2011-01-01

    The ease with which persons with upper-limb amputations can control their powered prostheses is largely determined by the efficacy of the user command interface. One needs to understand the abilities of the human operator regarding the different available options. Electromyography (EMG) is widely used to control powered upper-limb prostheses. It is an indirect estimator of muscle force and may be expected to limit the control capabilities of the prosthesis user. This study compared EMG control with force control, an interface that is used in everyday interactions with the environment. We used both methods to perform a position-tracking task. Direct-position control of the wrist provided an upper bound for human-operator capabilities. The results demonstrated that an EMG control interface is as effective as force control for the position-tracking task. We also examined the effects of gain and tracking frequency on EMG control to explore the limits of this control interface. We found that information transmission rates for myoelectric control were best at higher tracking frequencies than at the frequencies previously reported for position control. The results may be useful for the design of prostheses and prosthetic controllers.

  20. The myokinetic control interface: tracking implanted magnets as a means for prosthetic control.

    PubMed

    Tarantino, S; Clemente, F; Barone, D; Controzzi, M; Cipriani, C

    2017-12-07

    Upper limb amputation deprives individuals of their innate ability to manipulate objects. Such disability can be restored with a robotic prosthesis linked to the brain by a human-machine interface (HMI) capable of decoding voluntary intentions and sending motor commands to the prosthesis. Clinical or research HMIs rely on the interpretation of electrophysiological signals recorded from the muscles. However, the quest for an HMI that allows for arbitrary and physiologically appropriate control of dexterous prostheses is far from complete. Here we propose a new HMI that aims to track muscle contractions with implanted permanent magnets, by means of magnetic field sensors. We call this a myokinetic control interface. We present the concept and features of the interface and a demonstration of a prototype which exploits six 3-axis sensors to localize four magnets implanted in a forearm mockup, for the control of a dexterous hand prosthesis. The system proved highly linear (R² = 0.99) and precise (1% repeatability), while exhibiting a short computation delay (45 ms) and limited crosstalk errors (10% of the mean stroke of the magnets). Our results open up promising possibilities for amputees, demonstrating the viability of the myokinetic approach in implementing direct and simultaneous control over multiple digits of an artificial hand.
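
    The localization problem can be sketched as a nonlinear least-squares fit of a magnetic dipole position to multi-sensor readings. The Python sketch below simplifies to a single magnet with a known, fixed moment and an assumed ring of sensors; the paper's system localizes four magnets simultaneously.

      import numpy as np
      from scipy.optimize import least_squares

      def dipole_field(pos, moment, sensor_pos):
          """Flux density of a point dipole at `pos` seen by a 3-axis sensor
          (standard dipole equation; physical constants folded into `moment`)."""
          r = sensor_pos - pos
          d = np.linalg.norm(r)
          return 3.0 * r * (r @ moment) / d**5 - moment / d**3

      # Six 3-axis sensors on a 4 cm ring around a forearm mockup (assumed).
      sensors = 0.04 * np.array([[np.cos(a), np.sin(a), 0.0] for a in
                                 np.linspace(0, 2 * np.pi, 6, endpoint=False)])
      moment = np.array([0.0, 0.0, 1e-6])          # known, fixed magnet moment
      true_pos = np.array([0.01, -0.005, 0.002])   # ground truth for the demo
      readings = np.array([dipole_field(true_pos, moment, s) for s in sensors])

      def residuals(pos):
          pred = np.array([dipole_field(pos, moment, s) for s in sensors])
          return (pred - readings).ravel()

      fit = least_squares(residuals, x0=np.zeros(3))
      print("localized magnet at:", fit.x.round(4))  # close to true_pos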

  1. Interfaces for Advanced Computing.

    ERIC Educational Resources Information Center

    Foley, James D.

    1987-01-01

    Discusses the coming generation of supercomputers that will have the power to make elaborate "artificial realities" that facilitate user-computer communication. Illustrates these technological advancements with examples of the use of head-mounted monitors which are connected to position and orientation sensors, and gloves that track finger and…

  2. EEG Negativity in Fixations Used for Gaze-Based Control: Toward Converting Intentions into Actions with an Eye-Brain-Computer Interface

    PubMed Central

    Shishkin, Sergei L.; Nuzhdin, Yuri O.; Svirin, Evgeny P.; Trofimov, Alexander G.; Fedorova, Anastasia A.; Kozyrskiy, Bogdan L.; Velichkovsky, Boris M.

    2016-01-01

    We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels, using 300 ms segments free from electrooculogram contamination (200–500 ms relative to fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained with a shrinkage Linear Discriminant Analysis (LDA) classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between intended and spontaneous fixations. If this direction of BCI development proves successful, such a multimodal interface may improve the fluency of interaction and can possibly become the basis for a new input device.
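
    The shrinkage LDA classifier named in the abstract is available in standard libraries. The following sketch fits it to synthetic amplitude features standing in for the 13-channel, 200-500 ms FRP window; the data and effect size are synthetic and purely illustrative.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Toy stand-in for FRP amplitude features: 13 channels per fixation,
      # intentional fixations carrying an extra parieto-occipital negativity.
      rng = np.random.default_rng(2)
      n = 200
      spontaneous = rng.normal(0.0, 1.0, (n, 13))
      intentional = rng.normal(-0.6, 1.0, (n, 13))
      X = np.vstack([spontaneous, intentional])
      y = np.array([0] * n + [1] * n)

      # Shrinkage LDA, as in the abstract (lsqr solver supports shrinkage).
      clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
      print("training accuracy:", clf.score(X, y))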

  3. Technical Note: Construction of heterogeneous head phantom for quality control in stereotactic radiosurgery.

    PubMed

    Najafi, Mohsen; Teimouri, Javad; Shirazi, Alireza; Geraily, Ghazale; Esfahani, Mahbod; Shafaei, Mostafa

    2017-10-01

    Stereotactic radiosurgery is a high-precision modality for conformally delivering high doses of radiation to the brain lesion with a large dose volume. Several studies on the quality control of this technique measured the dose delivered to the target with a homogeneous head phantom and various dosimeters. Some studies were also performed with one or two instances of heterogeneity in the head phantom. However, these studies treated the head as a sphere with simple-shaped heterogeneity. The construction of a phantom with the same size, shape, and real inhomogeneity as an adult human head is needed; only then is it possible to measure the dose delivered to the area of interest accurately and compare it with the calculated dose. According to ICRU Report 44, polytetrafluoroethylene (PTFE) and methyl methacrylate were selected as bone and soft tissue, respectively. A set of computed tomography (CT) scans of a standard human head was taken, and simplified CT images were used to design the layers of the phantom. The parts of each slice were cut and attached together. Tests of density and CT number were done to compare the materials of the phantom with the tissues of the head. The dose delivered to the target was measured with an EBT3 film. The densities of the PTFE and Plexiglas inserted in the phantom are in good agreement with bone and soft tissue, and the CT numbers of these materials differ only slightly. The dose distributions from the EBT3 film and the treatment planning system are similar. The constructed phantom, with a size and inhomogeneity like an adult human head, is suitable for measuring the dose delivered to the area of interest and enables an accurate comparison with the dose calculated by the treatment planning system. Using this phantom, the actual dose delivered to the target was obtained. This anthropomorphic head phantom can be used in other modalities of

  4. Design of Flight Control Panel Layout using Graphical User Interface in MATLAB

    NASA Astrophysics Data System (ADS)

    Wirawan, A.; Indriyanto, T.

    2018-04-01

    This paper introduces the design of a Flight Control Panel (FCP) layout using the Graphical User Interface in MATLAB. The FCP is the interface used to issue commands to the simulation and to monitor model variables while the simulation is running. The commands accommodated by the FCP are the altitude command, the angle-of-sideslip command, the heading command, and the setting command for the turbulence model. The FCP was also designed to monitor the flight parameters while the simulation is running.

  5. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

    PubMed

    Crouser, R J; Chang, R

    2012-12-01

    Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

  6. The Berlin Brain–Computer Interface: Non-Medical Uses of BCI Technology

    PubMed Central

    Blankertz, Benjamin; Tangermann, Michael; Vidaurre, Carmen; Fazli, Siamac; Sannelli, Claudia; Haufe, Stefan; Maeder, Cecilia; Ramsey, Lenny; Sturm, Irene; Curio, Gabriel; Müller, Klaus-Robert

    2010-01-01

    Brain–computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies. PMID:21165175

  7. Mental workload during brain-computer interface training.

    PubMed

    Felton, Elizabeth A; Williams, Justin C; Vanderheiden, Gregg C; Radwin, Robert G

    2012-01-01

    It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor-disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0-100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD = 10.3), respectively. The TLX can be used to inform the design of BCIs that will have greater usability by evaluating subjective workload between BCI tasks, participant groups, and control modalities. Mental workload of brain-computer interfaces (BCIs) can be evaluated with the NASA Task Load Index (TLX). The TLX is an effective tool for comparing subjective workload between BCI tasks, participant groups (able-bodied and disabled), and control modalities. The data can inform the design of BCIs that will have greater usability.
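
    For reference, the weighted NASA-TLX overall score combines the six subscale ratings (0-100) with weights derived from 15 pairwise comparisons. The ratings and weights in this sketch are illustrative, not the study's data, and the study may have reported raw (unweighted) TLX scores instead.

      def tlx_overall(ratings, weights):
          """Weighted NASA-TLX overall workload: subscale ratings (0-100)
          combined with pairwise-comparison weights that must sum to 15."""
          assert sum(weights.values()) == 15
          return sum(ratings[k] * weights[k] for k in ratings) / 15.0

      # Illustrative numbers for one hypothetical BCI session.
      ratings = {"mental": 70, "physical": 20, "temporal": 55,
                 "performance": 40, "effort": 65, "frustration": 50}
      weights = {"mental": 5, "physical": 0, "temporal": 3,
                 "performance": 2, "effort": 4, "frustration": 1}
      print("overall workload:", tlx_overall(ratings, weights))  # 0-100 scale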

  8. Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design

    PubMed Central

    Lotte, Fabien; Larrue, Florian; Mühl, Christian

    2013-01-01

    While recent research on Brain-Computer Interfaces (BCI) has highlighted their potential for many applications, they remain barely used outside laboratories. The main reason is their lack of robustness. Indeed, with current BCIs, mental state recognition is usually slow and often incorrect. Spontaneous BCIs (i.e., mental imagery-based BCIs) often rely on mutual learning efforts by the user and the machine, with BCI users learning to produce stable ElectroEncephaloGraphy (EEG) patterns (spontaneous BCI control being widely acknowledged as a skill) while the computer learns to automatically recognize these EEG patterns, using signal processing. Most research so far has focused on signal processing, mostly neglecting the human in the loop. However, how well the user masters the BCI skill is also a key element explaining BCI robustness. Indeed, if the user is not able to produce stable and distinct EEG patterns, then no signal processing algorithm would be able to recognize them. Unfortunately, despite the importance of BCI training protocols, they have been scarcely studied so far, and used mostly unchanged for years. In this paper, we advocate that current human training approaches for spontaneous BCI are most likely inappropriate. We notably study the instructional design literature in order to identify the key requirements and guidelines for a successful training procedure that promotes good and efficient skill learning. This literature study highlights that current spontaneous BCI user training procedures satisfy very few of these requirements and hence are likely to be suboptimal. We therefore identify the flaws in BCI training protocols according to instructional design principles, at several levels: in the instructions provided to the user, in the tasks he/she has to perform, and in the feedback provided. For each level, we propose new research directions that are theoretically expected to address some of these flaws and to help users learn the BCI skill more

  9. Head Motion Modeling for Human Behavior Analysis in Dyadic Interaction

    PubMed Central

    Xiao, Bo; Georgiou, Panayiotis; Baucom, Brian; Narayanan, Shrikanth S.

    2015-01-01

    This paper presents a computational study of head motion in human interaction, notably of its role in conveying interlocutors’ behavioral characteristics. Head motion is physically complex and carries rich information; current modeling approaches based on visual signals, however, are still limited in their ability to adequately capture these important properties. Guided by the methodology of kinesics, we propose a data-driven approach to identify typical head motion patterns. The approach follows the steps of first segmenting motion events, then parametrically representing the motion by linear predictive features, and finally generalizing the motion types using Gaussian mixture models. The proposed approach is experimentally validated using video recordings of communication sessions from real couples involved in a couples therapy study. In particular we use the head motion model to classify binarized expert judgments of the interactants’ specific behavioral characteristics where entrainment in head motion is hypothesized to play a role: Acceptance, Blame, Positive, and Negative behavior. We achieve accuracies in the range of 60% to 70% for the various experimental settings and conditions. In addition, we describe a measure of motion similarity between the interaction partners based on the proposed model. We show that the relative change of head motion similarity during the interaction significantly correlates with the expert judgments of the interactants’ behavioral characteristics. These findings demonstrate the effectiveness of the proposed head motion model, and underscore the promise of analyzing human behavioral characteristics through signal processing methods. PMID:26557047
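
    The pipeline of linear predictive features followed by Gaussian mixture modeling can be sketched compactly. In the Python fragment below, the LPC order, the synthetic head-motion segments, and the number of mixture components are illustrative choices, not the paper's settings.

      import numpy as np
      from scipy.linalg import solve_toeplitz
      from sklearn.mixture import GaussianMixture

      def lpc_coeffs(signal, order=4):
          """LPC coefficients via the autocorrelation (Yule-Walker) method."""
          ac = np.correlate(signal, signal, "full")[len(signal) - 1:]
          return solve_toeplitz((ac[:order], ac[:order]), ac[1:order + 1])

      # Synthetic head-pitch velocity segments standing in for motion events,
      # drawn from two hypothetical motion "types" (slow vs. fast nodding).
      rng = np.random.default_rng(3)
      def segment(freq, length=100):
          t = np.arange(length)
          return np.sin(2 * np.pi * freq * t / length) + 0.1 * rng.normal(size=length)

      feats = np.array([lpc_coeffs(segment(f))
                        for f in rng.choice([2.0, 6.0], size=60)])
      gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
      print("inferred motion type per segment:", gmm.predict(feats[:10]))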

  10. EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton.

    PubMed

    Yin, Yue H; Fan, Yuan J; Xu, Li D

    2012-07-01

    Although a lower extremity exoskeleton shows great promise for the rehabilitation of the lower limb, it has not yet been widely applied to the clinical rehabilitation of the paralyzed. This is partly caused by insufficient information interaction between the paralyzed user and existing exoskeletons, which cannot meet the requirements of harmonious control. In this research, a bidirectional human-machine interface including a neurofuzzy controller and an extended physiological proprioception (EPP) feedback system is developed by imitating the biological closed-loop control system of the human body. The neurofuzzy controller is built to decode human motion in advance by fusing fuzzy electromyographic signals reflecting human motion intention with precise proprioception providing joint angular feedback information. It transmits control information from the human to the exoskeleton, while the EPP feedback system, based on haptic stimuli, transmits motion information from the exoskeleton back to the human. Joint angle and torque information are transmitted in the form of air pressure to the human body. The real-time bidirectional human-machine interface can help a patient with lower limb paralysis to control the exoskeleton with his/her healthy side and simultaneously perceive motion on the paralyzed side by EPP. The interface rebuilds a closed-loop motion control system for paralyzed patients and realizes harmonious control of the human-machine system.

  11. Personalized keystroke dynamics for self-powered human-machine interfacing.

    PubMed

    Chen, Jun; Zhu, Guang; Yang, Jin; Jing, Qingshen; Bai, Peng; Yang, Weiqing; Qi, Xuewei; Su, Yuanjie; Wang, Zhong Lin

    2015-01-27

    The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human-machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without applying external power. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.
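
    Keystroke-dynamics identification from timing features alone can be sketched as follows. The hold-time and flight-time statistics and the classifier are illustrative assumptions (the IKB additionally senses typing force, which is omitted here).

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Each sample: key hold times and inter-key flight times for a fixed
      # passphrase; two synthetic typists with different timing habits.
      rng = np.random.default_rng(4)
      def typist_samples(mean_hold, mean_flight, n=40, keys=8):
          hold = rng.normal(mean_hold, 0.02, (n, keys))
          flight = rng.normal(mean_flight, 0.03, (n, keys - 1))
          return np.hstack([hold, flight])          # seconds

      X = np.vstack([typist_samples(0.10, 0.18), typist_samples(0.14, 0.25)])
      y = np.array([0] * 40 + [1] * 40)

      # Train on even-indexed samples, test identification on the rest.
      clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
      print("held-out identification accuracy:", clf.score(X[1::2], y[1::2]))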

  12. Using Simulation Speeds to Differentiate Controller Interface Concepts

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna; Pope, Alan

    2008-01-01

    This study investigated two concepts: (1) whether speeding up a human-in-the-loop simulation (or the subject's "world") scales time stress in such a way as to cause primary task performance to reveal workload differences between experimental conditions, and (2) whether using natural hand motions to control the attitude of an aircraft makes controlling the aircraft easier and more intuitive. This was accomplished by having pilots and non-pilots make altitude and heading changes using three different control inceptors at three simulation speeds. Results indicate that simulation speed does affect workload and controllability. The bank and pitch angle error was affected by simulation speed but not by a simulation speed by controller type interaction; this may have been due to the relatively easy flying task. Results also indicate that pilots could control the bank and pitch angle of an aircraft about equally well with the glove as with the sidestick. Non-pilots approached the pilots' ability to control the bank and pitch angle of an aircraft using the positional glove, where the hand angle is directly proportional to the commanded aircraft angle. Therefore, (1) changing the simulation speed lends itself to objectively indexing a subject's workload and may also aid in differentiating among interface concepts based upon performance if the task being studied is sufficiently challenging, and (2) using natural body movements to mimic the movement of an airplane for attitude control is feasible.

  13. Step 1: Human System Integration (HSI) FY05 Pilot-Technology Interface Requirements for Command, Control, and Communications (C3)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The document provides the Human System Integration (HSI) high-level functional C3 HSI requirements for the interface to the pilot. The description includes (1) the information required by the pilot to have knowledge of C3 system status, and (2) the control capability needed by the pilot to obtain C3 information. Fundamentally, these requirements provide the candidate C3 technology concepts with the necessary human-related elements to make them compatible with human capabilities and limitations. The results of the analysis describe how C3 operations and functions should interface with the pilot to provide the necessary C3 functionality to the UA-pilot system. Requirements and guidelines for C3 are partitioned into three categories: (1) Pilot-Air Traffic Control (ATC) Voice Communications, (2) Pilot-ATC Data Communications, and (3) command and control of the unmanned aircraft (UA). Each requirement is stated and is supported with a rationale and associated reference(s).

  14. Neuroengineering Tools/Applications for Bidirectional Interfaces, Brain–Computer Interfaces, and Neuroprosthetic Implants – A Review of Recent Progress

    PubMed Central

    Rothschild, Ryan Mark

    2010-01-01

    The main focus of this review is to provide a holistic amalgamated overview of the most recent human in vivo techniques for implementing brain–computer interfaces (BCIs), bidirectional interfaces, and neuroprosthetics. Neuroengineering is providing new methods for tackling current difficulties; however, neuroprosthetics have been studied for decades. Recent progress is permitting the design of better systems with higher accuracy, repeatability, and robustness. Bidirectional interfaces integrate recording and the relaying of information from and to the brain for the development of BCIs. The concepts of non-invasive and invasive recording of brain activity are introduced. This includes classical and innovative techniques like electroencephalography and near-infrared spectroscopy. Then the problem of gliosis and solutions for (semi-)permanent implant biocompatibility, such as innovative implant coatings, materials, and shapes, are discussed. Implant power and the transmission of their data through implanted pulse generators and wireless telemetry are taken into account. How sensation can be relayed back to the brain to increase integration of the neuroengineered systems with the body, by methods such as micro-stimulation and transcranial magnetic stimulation, is then addressed. The neuroprosthetic section discusses some of the various types and how they operate. Visual prosthetics are discussed, and the three types, dependent on implant location, are examined. Auditory prosthetics, being cochlear or cortical, are then addressed. Replacement hand and limb prosthetics are then considered. These are followed by sections concentrating on the control of wheelchairs, computers, and robotics directly from brain activity as recorded by non-invasive and invasive techniques. PMID:21060801

  15. Neural correlates of learning in an electrocorticographic motor-imagery brain-computer interface

    PubMed Central

    Blakely, Tim M.; Miller, Kai J.; Rao, Rajesh P. N.; Ojemann, Jeffrey G.

    2014-01-01

    Human subjects can learn to control a one-dimensional electrocorticographic (ECoG) brain-computer interface (BCI) using modulation of primary motor (M1) high-gamma activity (signal power in the 75–200 Hz range). However, the stability and dynamics of the signals over the course of new BCI skill acquisition have not been investigated. In this study, we report 3 characteristic periods in evolution of the high-gamma control signal during BCI training: initial, low task accuracy with corresponding low power modulation in the gamma spectrum, followed by a second period of improved task accuracy with increasing average power separation between activity and rest, and a final period of high task accuracy with stable (or decreasing) power separation and decreasing trial-to-trial variance. These findings may have implications in the design and implementation of BCI control algorithms. PMID:25599079
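
    The control signal described here, high-gamma band power, can be estimated with standard spectral tools. The sketch below uses Welch's method on synthetic traces; the sampling rate, window length, and signals are assumptions, not the study's recording parameters.

      import numpy as np
      from scipy.signal import welch

      def high_gamma_power(ecog, fs=1000.0, band=(75.0, 200.0)):
          """Mean spectral power in the 75-200 Hz high-gamma band,
          estimated with Welch's method (parameters are illustrative)."""
          freqs, psd = welch(ecog, fs=fs, nperseg=256)
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return psd[mask].mean()

      # Synthetic 1 s M1 traces: broadband background, plus a high-gamma
      # burst standing in for motor-imagery modulation.
      rng = np.random.default_rng(5)
      t = np.arange(1000) / 1000.0
      rest = rng.normal(size=1000)
      imagery = rest + 0.5 * np.sin(2 * np.pi * 120.0 * t)
      print("rest:", high_gamma_power(rest), "imagery:", high_gamma_power(imagery))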

  16. Countermanding eye-head gaze shifts in humans: marching orders are delivered to the head first.

    PubMed

    Corneil, Brian D; Elsley, James K

    2005-07-01

    The countermanding task requires subjects to cancel a planned movement on appearance of a stop signal, providing insights into response generation and suppression. Here, we studied human eye-head gaze shifts in a countermanding task with targets located beyond the horizontal oculomotor range. Consistent with head-restrained saccadic countermanding studies, the proportion of gaze shifts on stop trials increased the longer the stop signal was delayed after target presentation, and gaze shift stop-signal reaction times (SSRTs: a derived statistic measuring how long it takes to cancel a movement) averaged approximately 120 ms across seven subjects. We also observed a marked proportion of trials (13% of all stop trials) during which gaze remained stable but the head moved toward the target. Such head movements were more common at intermediate stop signal delays. We never observed the converse sequence wherein gaze moved while the head remained stable. SSRTs for head movements averaged approximately 190 ms or approximately 70-75 ms longer than gaze SSRTs. Although our findings are inconsistent with a single race to threshold as proposed for controlling saccadic eye movements, movement parameters on stop trials attested to interactions consistent with a race model architecture. To explain our data, we tested two extensions to the saccadic race model. The first assumed that gaze shifts and head movements are controlled by parallel but independent races. The second model assumed that gaze shifts and head movements are controlled by a single race, preceded by terminal ballistic intervals not under inhibitory control, and that the head-movement branch is activated at a lower threshold. Although simulations of both models produced acceptable fits to the empirical data, we favor the second alternative as it is more parsimonious with recent findings in the oculomotor system. Using the second model, estimates for gaze and head ballistic intervals were approximately 25 and 90 ms
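
    The authors' favored single-race account can be caricatured in a few lines: one GO process read out at a lower threshold for the head than for gaze, racing a STOP process. All rates, thresholds, and noise values in this Python sketch are illustrative, chosen only to reproduce the qualitative pattern (gaze shifts increase with stop-signal delay; head-only movements are most common at intermediate delays).

      import numpy as np

      rng = np.random.default_rng(6)

      def stop_trial_outcomes(ssd, n=10000):
          """Simulate stop trials: a GO accumulator crosses the head threshold
          before the gaze threshold; a STOP process starting at `ssd` cancels
          whatever has not yet reached threshold (all values illustrative)."""
          go_rate = rng.normal(0.5, 0.1, n).clip(0.1)     # threshold units/ms
          stop_rate = rng.normal(2.0, 0.2, n).clip(0.5)
          t_head = 70.0 / go_rate       # lower threshold: head launches first
          t_gaze = 100.0 / go_rate
          t_stop = ssd + 100.0 / stop_rate
          gaze_moves = t_gaze < t_stop                # gaze shift escapes stop
          head_only = ~gaze_moves & (t_head < t_stop) # head moves, gaze stable
          return gaze_moves.mean(), head_only.mean()

      for ssd in (50, 100, 150, 200):
          g, h = stop_trial_outcomes(ssd)
          print(f"SSD {ssd} ms: gaze shifts {g:.2f}, head-only trials {h:.2f}")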

  17. Distributed user interfaces for clinical ubiquitous computing applications.

    PubMed

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices, such as digital pens, an active desk, and walk-up displays, that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model, which allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events, and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite for future mobile user interfaces and essential for developing clinical multi-device environments.

  18. Optical mass memory system (AMM-13). AMM/DBMS interface control document

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The baseline for the external interfaces of a 10^13-bit optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; the AMM-13, Data Base Management System, and NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and facilities interface.

  19. A Microcomputer Interface for External Circuit Control.

    ERIC Educational Resources Information Center

    Gorham, D. A.

    1983-01-01

    Describes an interface designed to meet the requirements of an instrumentation teaching laboratory, particularly to develop computer-controlled digital circuitry while exploiting electrical drive properties of common transistor-transistor logic (TTL) devices, minimizing cost/number of components. Discusses decoding for Pet, switches, lights, and…

  20. Combined Auditory and Vibrotactile Feedback for Human-Machine-Interface Control.

    PubMed

    Thorp, Elias B; Larson, Eric; Stepp, Cara E

    2014-01-01

    The purpose of this study was to determine the effect of the addition of binary vibrotactile stimulation to continuous auditory feedback (vowel synthesis) for human-machine interface (HMI) control. Sixteen healthy participants controlled facial surface electromyography to achieve 2-D targets (vowels). Eight participants used only real-time auditory feedback to locate targets whereas the other eight participants were additionally alerted to having achieved targets with confirmatory vibrotactile stimulation at the index finger. All participants trained using their assigned feedback modality (auditory alone or combined auditory and vibrotactile) over three sessions on three days and completed a fourth session on the third day using novel targets to assess generalization. Analyses of variance performed on the 1) percentage of targets reached and 2) percentage of trial time at the target revealed a main effect for feedback modality: participants using combined auditory and vibrotactile feedback performed significantly better than those using auditory feedback alone. No effect was found for session or the interaction of feedback modality and session, indicating a successful generalization to novel targets but lack of improvement over training sessions. Future research is necessary to determine the cognitive cost associated with combined auditory and vibrotactile feedback during HMI control.

  1. Biomechanical responses of a pig head under blast loading: a computational simulation.

    PubMed

    Zhu, Feng; Skelton, Paul; Chou, Cliff C; Mao, Haojie; Yang, King H; King, Albert I

    2013-03-01

    A series of computational studies was performed to investigate the biomechanical responses of the pig head under a specific shock tube environment. A finite element model of the head of a 50-kg Yorkshire pig was developed in sufficient detail, based on the Lagrangian formulation, and a shock tube model was developed using the multimaterial arbitrary Lagrangian-Eulerian (MMALE) approach. These two models were integrated and a fluid/solid coupling algorithm was used to simulate the interaction of the shock wave with the pig's head. The finite element model-predicted incident and intracranial pressure traces were in reasonable agreement with those obtained experimentally. Using the verified numerical model of the shock tube and pig head, further investigations were carried out to study the spatial and temporal distributions of pressure, shear stress, and principal strain within the head. Pressure enhancement was found in the skull, which is believed to be caused by shock wave reflection at the interface of materials with distinct wave impedances. Brain tissue has a shock attenuation effect, and larger pressures were observed in the frontal and occipital regions, suggesting a greater possibility of coup and contrecoup contusion. Shear stresses in the brain and deflection in the skull remained at a low level. Higher principal strains were observed in the brain near the foramen magnum, suggesting that there is a greater chance of cellular or vascular injuries in the brainstem region. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Gender Differences between Graphical User Interfaces and Command Line Interfaces in Computer Instruction.

    ERIC Educational Resources Information Center

    Barker, Dan L.

    This study focused primarily on two types of computer interfaces and the differences in academic performance that resulted from their use; it was secondarily designed to examine gender differences that may have existed before and after any change in interface. Much of the basic research in computer use was conducted with command line interface…

  3. A Dual-Mode Human Computer Interface Combining Speech and Tongue Motion for People with Severe Disabilities

    PubMed Central

    Huo, Xueliang; Park, Hangue; Kim, Jeonghee; Ghovanloo, Maysam

    2015-01-01

    We are presenting a new wireless and wearable human computer interface called the dual-mode Tongue Drive System (dTDS), which is designed to allow people with severe disabilities to use computers more effectively with increased speed, flexibility, usability, and independence through their tongue motion and speech. The dTDS detects users’ tongue motion using a magnetic tracer and an array of magnetic sensors embedded in a compact and ergonomic wireless headset. It also captures the users’ voice wirelessly using a small microphone embedded in the same headset. Preliminary evaluation results based on 14 able-bodied subjects and three individuals with high level spinal cord injuries at level C3–C5 indicated that the dTDS headset, combined with a commercially available speech recognition (SR) software, can provide end users with significantly higher performance than either unimodal forms based on the tongue motion or speech alone, particularly in completing tasks that require both pointing and text entry. PMID:23475380

  4. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    NASA Astrophysics Data System (ADS)

    Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interaction between human operators and UAVs (unmanned aerial vehicles) are demanded by the military industry to achieve a better balance between UAV control and the workload of the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control: a tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures specially developed for the validation of the proposed system.
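
    As a rough illustration of the recognition stage (spatio-temporal depth descriptors fed to an SVM), the Python sketch below trains an RBF-kernel SVM on stand-in LBP-style histograms. The descriptor computation, kernel choice, class count, and data are assumptions; the paper's actual descriptors are computed from depth video sequences.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    # Stand-ins for spatio-temporal depth descriptors: one fixed-length
    # histogram of LBP-style codes pooled over a depth clip, per sample.
    n_samples, n_bins, n_gestures = 200, 256, 4
    X = rng.random((n_samples, n_bins))
    X /= X.sum(axis=1, keepdims=True)            # histograms sum to 1
    y = rng.integers(0, n_gestures, n_samples)   # gesture labels

    # An RBF-kernel SVM is a common default; the paper does not state its
    # kernel, so this choice is an assumption.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    # On random data this hovers near chance; real descriptors separate classes.
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```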

  5. TangibleCubes — Implementation of Tangible User Interfaces through the Usage of Microcontroller and Sensor Technology

    NASA Astrophysics Data System (ADS)

    Setscheny, Stephan

    The interaction between human beings and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and sensors for detecting human interactions, developers gain new possibilities for realising innovative interfaces. With this movement, the relevance of the computer in its conventional sense, and of graphical user interfaces, is decreasing. The impact of this technical evolution is especially visible in the area of ubiquitous computing and interaction through tangible user interfaces. Moreover, tangible and experienceable interaction offers users an interactive and intuitive method for controlling technical objects. The implementation of microcontrollers for control functions, together with sensors, enables the realisation of these experienceable interfaces. Besides the theory of tangible user interfaces, the consideration of sensors and the Arduino platform forms a main aspect of this work.

  6. Visual gate for brain-computer interfaces.

    PubMed

    Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H

    2009-01-01

    Brain-Computer Interfaces (BCI) based on event-related potentials (ERP) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject-specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to visuospatial processing of the cue, which improved the classification accuracy (error rates as low as 6%) of ERPs in response to left vs. right cues soon (150-200 ms) after cue presentation. This work suggests that such a visual interface may be of interest in BCI applications as a gating mechanism for attention estimation and validation of control decisions.

  7. Methods for Improving the User-Computer Interface. Technical Report.

    ERIC Educational Resources Information Center

    McCann, Patrick H.

    This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…

  8. Learning toward practical head pose estimation

    NASA Astrophysics Data System (ADS)

    Sang, Gaoli; He, Feixiang; Zhu, Rong; Xuan, Shibin

    2017-08-01

    Head pose is useful information for many face-related tasks, such as face recognition, behavior analysis, human-computer interfaces, etc. Existing head pose estimation methods usually assume that the face images have been well aligned or that sufficient and precise training data are available. In practical applications, however, these assumptions are very likely to be invalid. This paper first investigates the impact of the failure of these assumptions, i.e., misalignment of face images, uncertainty and undersampling of training data, on head pose estimation accuracy of state-of-the-art methods. A learning-based approach is then designed to enhance the robustness of head pose estimation to these factors. To cope with misalignment, instead of using hand-crafted features, it seeks suitable features by learning from a set of training data with a deep convolutional neural network (DCNN), such that the training data can be best classified into the correct head pose categories. To handle uncertainty and undersampling, it employs multivariate labeling distributions (MLDs) with dense sampling intervals to represent the head pose attributes of face images. The correlation between the features and the dense MLD representations of face images is approximated by a maximum entropy model, whose parameters are optimized on the given training data. To estimate the head pose of a face image, its MLD representation is first computed according to the model based on the features extracted from the image by the trained DCNN, and its head pose is then assumed to be the one corresponding to the peak in its MLD. Evaluation experiments on the Pointing'04, FacePix, Multi-PIE, and CASIA-PEAL databases prove the effectiveness and efficiency of the proposed method.
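
    The final estimation step (reading the pose off the peak of the multivariate labeling distribution) is simple to state in code. The Python sketch below uses a synthetic distribution over yaw/pitch bins with an assumed 5-degree sampling interval; in the paper, the MLD comes from the maximum entropy model applied to the DCNN features.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Densely sampled pose bins (5-degree spacing is an assumption).
    yaw_bins = np.arange(-90, 91, 5)
    pitch_bins = np.arange(-60, 61, 5)

    # Synthetic stand-in for the learned multivariate labeling distribution
    # of one face image, normalized to a proper distribution over poses.
    mld = rng.random((len(pitch_bins), len(yaw_bins)))
    mld /= mld.sum()

    # The head pose estimate is the bin at the distribution's peak.
    i, j = np.unravel_index(np.argmax(mld), mld.shape)
    print(f"estimated pose: pitch={pitch_bins[i]} deg, yaw={yaw_bins[j]} deg")
    ```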

  9. Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.

    PubMed

    Rutkowski, Tomasz M; Mori, Hiromu

    2015-04-15

    The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movement control) or from the so-called "ear-blocking syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple head positions are used to evoke combined somatosensory and auditory (via the bone-conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). To further remove EEG interference and improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG, and is also computationally more efficient than empirical mode decomposition. SST filtering allows online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illustrated through information-transfer-rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, against classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
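
    A minimal sketch of the classification pipeline follows, with the SST stage stubbed out: a real implementation would replace the placeholder features with energy pooled from a synchrosqueezed time-frequency representation around the expected P300 latency. Epoch length, features, and data are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    def sst_features(epoch):
        """Placeholder for synchrosqueezing-transform preprocessing.

        A real implementation would sharpen the epoch's time-frequency
        representation (e.g., a synchrosqueezed CWT) and pool energy around
        the P300 latency; here we return crude spectral summaries so the
        pipeline runs end to end.
        """
        spectrum = np.abs(np.fft.rfft(epoch))
        return np.array([spectrum[1:8].sum(), spectrum[8:16].sum(), spectrum[16:32].sum()])

    # Synthetic target/non-target epochs (about 0.8 s at 256 Hz, assumed).
    epochs = rng.normal(size=(300, 205))
    labels = rng.integers(0, 2, 300)      # 1 = attended stimulus (P300 expected)

    X = np.array([sst_features(e) for e in epochs])
    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, labels, cv=5).mean())  # near chance on noise
    ```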

  10. Interfacing Computer Aided Parallelization and Performance Analysis

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)

    2003-01-01

    When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example for how the interface helps within the program development cycle of a benchmark code.

  11. Evaluation of a computerized aid for creating human behavioral representations of human-computer interaction.

    PubMed

    Williams, Kent E; Voigt, Jeffrey R

    2004-01-01

    The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.

  12. Soft, Conformal Bioelectronics for a Wireless Human-Wheelchair Interface

    PubMed Central

    Mishra, Saswat; Norton, James J. S.; Lee, Yongkuk; Lee, Dong Sup; Agee, Nicolas; Chen, Yanfei; Chun, Youngjae; Yeo, Woon-Hong

    2017-01-01

    There are more than 3 million people in the world whose mobility relies on wheelchairs. Recent advances in engineering technology enable more intuitive, easy-to-use rehabilitation systems. A human-machine interface that uses non-invasive, electrophysiological signals can allow systematic interaction between humans and devices; for example, eye-movement-based wheelchair control. However, existing machine-interface platforms are obtrusive, uncomfortable, and often cause skin irritation, as they require a metal electrode affixed to the skin with a gel and an acrylic pad. Here, we introduce a bioelectronic system that makes dry, conformal contact with the skin. The mechanically comfortable sensor records high-fidelity electrooculograms, comparable to those from a conventional gel electrode. Quantitative signal analysis and infrared thermographs show the advantages of the soft biosensor for an ergonomic human-machine interface. A classification algorithm with an optimized set of features shows an accuracy of 94% with five eye movements. A Bluetooth-enabled system incorporating the soft bioelectronics demonstrates precise, hands-free control of a robotic wheelchair via electrooculograms. PMID:28152485
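
    The classification step (a feature set computed from the two-channel electrooculogram, five eye-movement classes) can be sketched as follows. The features, the LDA classifier, and the synthetic data are assumptions; the paper does not list its optimized feature set.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(4)

    def eog_features(h, v):
        """Illustrative features for one horizontal/vertical EOG epoch:
        per-channel extrema and peak-to-peak range (an assumed feature set)."""
        return np.array([h.max(), h.min(), np.ptp(h), v.max(), v.min(), np.ptp(v)])

    # Synthetic epochs for five eye movements (e.g., left, right, up, down, blink),
    # with toy class-dependent offsets so the classes are separable.
    n_per_class, n_samples = 40, 250
    X, y = [], []
    for label in range(5):
        for _ in range(n_per_class):
            h = rng.normal(loc=label - 2, scale=0.5, size=n_samples)
            v = rng.normal(loc=label % 2, scale=0.5, size=n_samples)
            X.append(eog_features(h, v))
            y.append(label)

    clf = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))
    print("training accuracy:", clf.score(np.array(X), np.array(y)))
    ```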

  13. The Berlin Brain-Computer Interface: Progress Beyond Communication and Control

    PubMed Central

    Blankertz, Benjamin; Acqualagna, Laura; Dähne, Sven; Haufe, Stefan; Schultze-Kraft, Matthias; Sturm, Irene; Ušćumlic, Marija; Wenzel, Markus A.; Curio, Gabriel; Müller, Klaus-Robert

    2016-01-01

    The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way from integrating Brain-Computer Interface (BCI) technology into general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world. PMID:27917107

  14. The Berlin Brain-Computer Interface: Progress Beyond Communication and Control.

    PubMed

    Blankertz, Benjamin; Acqualagna, Laura; Dähne, Sven; Haufe, Stefan; Schultze-Kraft, Matthias; Sturm, Irene; Ušćumlic, Marija; Wenzel, Markus A; Curio, Gabriel; Müller, Klaus-Robert

    2016-01-01

    The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way from integrating Brain-Computer Interface (BCI) technology into general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world.

  15. Design considerations to improve cognitive ergonomic issues of unmanned vehicle interfaces utilizing video game controllers.

    PubMed

    Oppold, P; Rupp, M; Mouloua, M; Hancock, P A; Martin, J

    2012-01-01

    Unmanned systems (UAVs, UCAVs, and UGVs) still pose major human factors and ergonomic challenges related to the effective design of their control interfaces, which are crucial to efficient operation, maintenance, and safety. A human-centered approach to unmanned-system interfaces promotes intuitive interfaces that are easier to learn and reduces human error and other cognitive ergonomic problems with interface design. Automation has shifted workload from physical to cognitive, so control interfaces for unmanned systems need to reduce the operators' mental workload and facilitate the interaction between vehicle and operator. Two-handed video game controllers provide wide usability within the overall population, prior exposure for new operators, and a variety of interface complexity levels to match the complexity of the task and reduce cognitive load. This paper categorizes and provides a taxonomy for 121 haptic interfaces from the entertainment industry that can be utilized as control interfaces for unmanned systems. Controllers were grouped into five categories based on the complexity of their buttons, control pads, joysticks, and switches. This allows the selection of the level of complexity needed for a specific task without creating an entirely new design or utilizing an overly complex one.

  16. A neural-based remote eye gaze tracker under natural head motion.

    PubMed

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human-computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination, and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.

  17. Ten Design Points for the Human Interface to Instructional Multimedia.

    ERIC Educational Resources Information Center

    McFarland, Ronald D.

    1995-01-01

    Ten ways to design an effective Human-Computer Interface are explained. Highlights include material delivery that relates to user knowledge; appropriate screen presentations; attention value versus learning and recall; the relationship of packaging and message; the effectiveness of visuals and text; the use of color to enhance communication; the…

  18. A binary motor imagery tasks based brain-computer interface for two-dimensional movement control

    NASA Astrophysics Data System (ADS)

    Xia, Bin; Cao, Lei; Maysam, Oladazimi; Li, Jie; Xie, Hong; Su, Caixia; Birbaumer, Niels

    2017-12-01

    Objective. Two-dimensional movement control is a popular issue in brain-computer interface (BCI) research and has many applications in the real world. In this paper, we introduce a combined control strategy to a binary-class BCI system that allows the user to move a cursor in a two-dimensional (2D) plane. Users focus on a single moving vector to control 2D movement instead of controlling vertical and horizontal movement separately. Approach. Five participants took part in a fixed-target experiment and a random-target experiment to verify the effectiveness of the combined control strategy under fixed and random routine conditions. Both experiments were performed in a virtual 2D environment and visual feedback was provided on the screen. Main results. The five participants achieved an average hit rate of 98.9% and 99.4% for the fixed-target experiment and the random-target experiment, respectively. Significance. The results demonstrate that participants could move the cursor in the 2D plane effectively. The proposed control strategy is based only on a basic BCI with two motor-imagery classes, which enables more people to use it in real-life applications.
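
    One hypothetical rendering of such a combined strategy: the command vector's direction sweeps continuously, and the binary motor-imagery output gates movement along the current direction, so two classes suffice for 2D control. The Python sketch below illustrates the idea, not the paper's exact scheme.

    ```python
    import numpy as np

    # Hypothetical combined control: the direction of a single command vector
    # rotates continuously; a binary motor-imagery output (1 = imagery detected,
    # 0 = rest) gates movement along the current direction. The paper's exact
    # strategy may differ; this only illustrates 2D control from a binary signal.
    def step_cursor(pos, angle, mi_output, speed=1.0, rotation=np.pi / 30):
        angle = (angle + rotation) % (2 * np.pi)   # vector sweeps around the circle
        if mi_output == 1:                         # imagery moves the cursor
            pos = pos + speed * np.array([np.cos(angle), np.sin(angle)])
        return pos, angle

    pos, angle = np.zeros(2), 0.0
    for out in [0, 0, 1, 1, 0, 1]:                 # stand-in decoder outputs
        pos, angle = step_cursor(pos, angle, out)
    print(pos)
    ```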

  19. Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study.

    PubMed

    Laffont, Isabelle; Biard, Nicolas; Chalubert, Gérard; Delahoche, Laurent; Marhic, Bruno; Boyer, François C; Leroux, Christophe

    2009-10-01

    Grasping robots are still difficult for persons with disabilities to use because of inadequate human-machine interfaces (HMIs). Our purpose was to evaluate the efficacy of a graphic interface, enhanced by a panoramic camera to detect out-of-view objects, for controlling a commercialized robotic grasping arm. This was a multicenter, open-label trial conducted in four French departments of physical and rehabilitation medicine. Control subjects (N=24; mean age, 33y) and 20 severely impaired patients (mean age, 44y; 5 with muscular dystrophies, 13 with traumatic tetraplegia, and 2 others) completed the study. None of these patients was able to grasp a 50-cL bottle without the robot. Participants were asked to grasp 6 objects scattered around their wheelchair using the robotic arm, selecting the desired object through the graphic interface available on their computer screen. Outcome measures were the global success rate, the time needed to select the object on the computer screen, the number of clicks on the HMI, and user satisfaction. We found a significantly lower success rate in patients (81.1% vs 88.7%; chi-square, P=.017). The duration of the task was significantly longer in patients (71.6s vs 39.1s; P<.001). We set a cut-off for the maximum duration at 79 seconds, representing twice the time needed by the control subjects to complete the task; under this condition, the success rate for the impaired participants was 65% versus 85.4% for control subjects. The mean number of clicks necessary to select the object with the HMI was very close in both groups: patients used 7.99+/-6.07 clicks (mean +/- SD), whereas controls used 7.04+/-2.87 clicks. Considering the severity of the patients' impairment, all these differences were considered small. Furthermore, a high satisfaction rate was reported for this population concerning the use of the

  20. Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.

    PubMed

    Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C

    2016-03-01

    Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. © The Author(s) 2015.

  1. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    PubMed

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

    This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) such as the P300. While eye movements and ERPs have each been used separately for implementing assistive interfaces that help patients with motor disabilities perform daily tasks, the proposed hybrid interface integrates them so that they complement each other, providing better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements: blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components, including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out: one controlling a multifunctional humanoid robot, and the other controlling four mobile robots. In both experiments, the subjects completed the tasks effectively using the proposed interface, with best completion times that were relatively short and very close to those achieved by hand operation.
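
    A toy version of the EOG-mode threshold algorithm is sketched below. The thresholds and per-movement rules are invented for illustration (the actual algorithm also detects frowns, which would need additional cues); only the four-movement menu comes from the abstract.

    ```python
    import numpy as np

    # Made-up amplitude thresholds (volts) for a two-channel EOG epoch.
    BLINK_T, WINK_T, GAZE_T = 200e-6, 120e-6, 80e-6

    def classify_eog(h, v):
        """Classify one epoch of horizontal (h) and vertical (v) EOG; toy rules."""
        if np.max(np.abs(v)) > BLINK_T and np.max(np.abs(h)) < GAZE_T:
            return "blink"   # large vertical deflection, little horizontal activity
        if np.max(np.abs(h)) > WINK_T and np.max(h) * np.min(h) < 0:
            return "wink"    # biphasic horizontal deflection (toy rule)
        if np.max(np.abs(h)) > GAZE_T:
            return "gaze"    # sustained horizontal shift
        return "none"        # frown detection omitted; it needs extra cues

    h = np.r_[np.zeros(50), 150e-6 * np.ones(100), np.zeros(50)]
    v = np.zeros(200)
    print(classify_eog(h, v))  # -> "gaze"
    ```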

  2. Cortical excitability correlates with the event-related desynchronization during brain-computer interface control

    NASA Astrophysics Data System (ADS)

    Daly, Ian; Blanchard, Caroline; Holmes, Nicholas P.

    2018-04-01

    Objective. Brain-computer interfaces (BCIs) based on motor control have been suggested as tools for stroke rehabilitation. Some initial successes have been achieved with this approach; however, the mechanism by which they work is not yet fully understood. One possible part of this mechanism is a previously suggested relationship between the strength of the event-related desynchronization (ERD), a neural correlate of motor imagination and execution, and corticospinal excitability. Additionally, a key component of BCIs used in neurorehabilitation is the provision of visual feedback to positively reinforce attempts at motor control. However, the ability of visual feedback of the ERD to modulate the activity in the motor system has not been fully explored. Approach. We investigate these relationships via transcranial magnetic stimulation delivered at different moments in the ongoing ERD related to hand contraction and relaxation during BCI control of a visual feedback bar. Main results. We identify a significant relationship between ERD strength and corticospinal excitability, and find that our visual feedback does not affect corticospinal excitability. Significance. Our results imply that efforts to promote functional recovery in stroke by targeting increases in corticospinal excitability may be aided by accounting for the time course of the ERD.

  3. TMS communications hardware. Volume 1: Computer interfaces

    NASA Technical Reports Server (NTRS)

    Brown, J. S.; Weinrich, S. S.

    1979-01-01

    A prototype coaxial cable bus communications system was designed to be used in the Trend Monitoring System (TMS) to connect intelligent graphics terminals (based on a Data General NOVA/3 computer) to a MODCOMP IV host minicomputer. The direct memory access (DMA) interfaces utilized for each of these computers are identified. It is shown that for the MODCOMP an off-the-shelf board was suitable, while for the NOVAs custom interface circuitry was designed and implemented.

  4. Corrosion at the head-neck interface of current designs of modular femoral components: essential questions and answers relating to corrosion in modular head-neck junctions.

    PubMed

    Osman, K; Panagiotidou, A P; Khan, M; Blunn, G; Haddad, F S

    2016-05-01

    There is increasing global awareness of adverse reactions to metal debris and elevated serum metal ion concentrations following the use of second generation metal-on-metal total hip arthroplasties. The high incidence of these complications can be largely attributed to corrosion at the head-neck interface. Severe corrosion of the taper is identified most commonly in association with larger diameter femoral heads. However, there is emerging evidence of varying levels of corrosion observed in retrieved components with smaller diameter femoral heads. This same mechanism of galvanic and mechanically-assisted crevice corrosion has been observed in metal-on-polyethylene and ceramic components, suggesting an inherent biomechanical problem with current designs of the head-neck interface. We provide a review of the fundamental questions and answers clinicians and researchers must understand regarding corrosion of the taper, and its relevance to current orthopaedic practice. Cite this article: Bone Joint J 2016;98-B:579-84. ©2016 The British Editorial Society of Bone & Joint Surgery.

  5. Stereo depth and the control of locomotive heading

    NASA Astrophysics Data System (ADS)

    Rushton, Simon K.; Harris, Julie M.

    1998-04-01

    Does the addition of stereoscopic depth aid steering--the perceptual control of locomotor heading--around an environment? This is a critical question when designing a tele-operation or Virtual Environment system, with implications for computational resources and visual comfort. We examined the role of stereoscopic depth in the perceptual control of heading by employing an active steering task. Three conditions were tested: stereoscopic depth; incorrect stereoscopic depth and no stereoscopic depth. Results suggest that stereoscopic depth does not improve performance in a visual control task. A further set of experiments examined the importance of a ground plane. As a ground plane is a common feature of all natural environments and provides a pictorial depth cue, it has been suggested that the visual system may be especially attuned to exploit its presence. Thus it would be predicted that a ground plane would aid judgments of locomotor heading. Results suggest that the presence of rich motion information in the lower visual field produces significant performance advantages and that provision of such information may prove a better target for system resources than stereoscopic depth. These findings have practical consequences for a system designer and also challenge previous theoretical and psychophysical perceptual research.

  6. Sequenced subjective accents for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Vlek, R. J.; Schaefer, R. S.; Gielen, C. C. A. M.; Farquhar, J. D. R.; Desain, P.

    2011-06-01

    Subjective accenting is a cognitive process in which identical auditory pulses at an isochronous rate turn into the percept of an accenting pattern. This process can be voluntarily controlled, making it a candidate for communication from human user to machine in a brain-computer interface (BCI) system. In this study we investigated whether subjective accenting is a feasible paradigm for BCI and how its time-structured nature can be exploited for optimal decoding from non-invasive EEG data. Ten subjects perceived and imagined different metric patterns (two-, three- and four-beat) superimposed on a steady metronome. With an offline classification paradigm, we classified imagined accented beats from non-accented beats at the single-trial (0.5 s) level with an average accuracy of 60.4% over all subjects. We show that decoding of imagined accents is also possible with a classifier trained on perception data. Cyclic patterns of accents and non-accents were successfully decoded with a sequence classification algorithm. Classification performances were compared by means of bit rate. Performance in the best scenario translates into an average bit rate of 4.4 bits/min over subjects, which makes subjective accenting a promising paradigm for an online auditory BCI.
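
    The bit rate used for such comparisons is conventionally Wolpaw's information transfer rate, which combines the number of classes, the accuracy, and the decision rate. A small Python rendering, with an assumed trial rate for illustration:

    ```python
    from math import log2

    def wolpaw_bitrate(n_classes, accuracy, trials_per_min):
        """Wolpaw information transfer rate in bits/min, a standard BCI metric."""
        p, n = accuracy, n_classes
        if p <= 1 / n:        # at or below chance carries no information
            return 0.0
        bits_per_trial = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
        return bits_per_trial * trials_per_min

    # Binary accent/non-accent decisions at the reported 60.4% accuracy; the
    # trial rate here (two 0.5 s trials per second) is an assumption, so the
    # result only approximates the paper's bits-per-minute figure.
    print(wolpaw_bitrate(n_classes=2, accuracy=0.604, trials_per_min=120))
    ```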

  7. Realistic numerical modelling of human head tissue exposure to electromagnetic waves from cellular phones

    NASA Astrophysics Data System (ADS)

    Scarella, Gilles; Clatz, Olivier; Lanteri, Stéphane; Beaume, Grégory; Oudot, Steve; Pons, Jean-Philippe; Piperno, Sergo; Joly, Patrick; Wiart, Joe

    2006-06-01

    The ever-rising diffusion of cellular phones has brought about increased concern for the possible consequences of electromagnetic radiation on human health. Possible thermal effects have been investigated, via experimentation or simulation, by several research projects in the last decade. Concerning numerical modeling, the power absorption in a user's head is generally computed using discretized models built from clinical MRI data. The vast majority of such numerical studies have been conducted using finite-difference time-domain methods, although their accuracy is strongly limited by tissue heterogeneity, the poor definition of the detailed structures of head tissues (staircasing effects), etc. In order to enable numerical modeling using finite element or discontinuous Galerkin time-domain methods, reliable automated tools for the unstructured discretization of human heads are also needed. The results presented in this article aim at filling the gap between human head MRI images and the accurate numerical modeling of wave propagation in biological tissues and its thermal effects. To cite this article: G. Scarella et al., C. R. Physique 7 (2006).

  8. Performance monitoring for brain-computer-interface actions.

    PubMed

    Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf

    2017-02-01

    When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action--moving a cursor on a computer screen--without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment, where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6 s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Brain–computer interfaces: communication and restoration of movement in paralysis

    PubMed Central

    Birbaumer, Niels; Cohen, Leonardo G

    2007-01-01

    The review describes the status of brain–computer or brain–machine interface research. We focus on non-invasive brain–computer interfaces (BCIs) and their clinical utility for direct brain communication in paralysis and motor restoration in stroke. A large gap between the promises of invasive animal and human BCI preparations and the clinical reality characterizes the literature: while intact monkeys learn to execute more or less complex upper limb movements with spike patterns from motor brain regions alone without concomitant peripheral motor activity usually after extensive training, clinical applications in human diseases such as amyotrophic lateral sclerosis and paralysis from stroke or spinal cord lesions show only limited success, with the exception of verbal communication in paralysed and locked-in patients. BCIs based on electroencephalographic potentials or oscillations are ready to undergo large clinical studies and commercial production as an adjunct or a major assisted communication device for paralysed and locked-in patients. However, attempts to train completely locked-in patients with BCI communication after entering the complete locked-in state with no remaining eye movement failed. We propose that a lack of contingencies between goal directed thoughts and intentions may be at the heart of this problem. Experiments with chronically curarized rats support our hypothesis; operant conditioning and voluntary control of autonomic physiological functions turned out to be impossible in this preparation. In addition to assisted communication, BCIs consisting of operant learning of EEG slow cortical potentials and sensorimotor rhythm were demonstrated to be successful in drug resistant focal epilepsy and attention deficit disorder. First studies of non-invasive BCIs using sensorimotor rhythm of the EEG and MEG in restoration of paralysed hand movements in chronic stroke and single cases of high spinal cord lesions show some promise, but need extensive

  10. Towards SSVEP-based, portable, responsive Brain-Computer Interface.

    PubMed

    Kaczmarek, Piotr; Salomon, Pawel

    2015-08-01

    A brain-computer interface for motion control applications requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized based on a canonical correlation calculated over a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) is proposed for recognition. The obtained results suggest that the T-H classifier significantly increases classification performance (yielding an accuracy of 76%, while maintaining an average false-positive detection rate of 2-13% for stimuli other than the observed one, depending on stimulus frequency). It was shown that the parameters of the T-H classifier that maximize the true-positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results on a test group (N=4) suggest that for the T-H classifier there exists a set of parameters for which system accuracy is similar to that obtained with a user-trained classifier.
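
    The two main ingredients (canonical correlation against per-stimulus reference signals, and a threshold classifier with hysteresis) can be sketched as follows in Python. The window length matches the abstract; the stimulus frequencies, harmonics, thresholds, and channel count are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    fs, win = 256, 1.0                  # 1 s analysis window, as in the abstract
    t = np.arange(int(fs * win)) / fs

    def reference_set(freq, harmonics=2):
        """Sine/cosine references at the stimulus frequency and its harmonics."""
        return np.column_stack([f(2 * np.pi * h * freq * t)
                                for h in range(1, harmonics + 1)
                                for f in (np.sin, np.cos)])

    def cca_score(eeg, freq):
        """Canonical correlation between a (samples x channels) EEG window
        and the reference signals of one stimulus frequency."""
        u, v = CCA(n_components=1).fit_transform(eeg, reference_set(freq))
        return abs(np.corrcoef(u.ravel(), v.ravel())[0, 1])

    class ThresholdHysteresis:
        """Toy T-H classifier: a detection switches on above t_on and is only
        released below t_off (t_off < t_on); the thresholds are made up."""
        def __init__(self, t_on=0.45, t_off=0.35):
            self.t_on, self.t_off, self.active = t_on, t_off, None

        def update(self, scores):       # scores: {frequency: correlation}
            best = max(scores, key=scores.get)
            if self.active is None and scores[best] > self.t_on:
                self.active = best
            elif self.active is not None and scores[self.active] < self.t_off:
                self.active = None
            return self.active

    rng = np.random.default_rng(8)
    eeg = rng.normal(size=(len(t), 2))  # one window of 2-channel EEG (noise here)
    scores = {f: cca_score(eeg, f) for f in (8.0, 10.0, 12.0)}
    print(scores, ThresholdHysteresis().update(scores))
    ```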

  11. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces

    PubMed Central

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J.; Latorre, José M.; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper considers the future of alternative treatments for auditory verbal hallucinations (AVH) in patients with schizophrenia, treatments that take a social and cognitive approach alongside pharmacological therapy. AVH, the perception of voices in the absence of auditory stimulation, represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain computer interfaces (BCI) are technologies with fast-growing use in medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of new technologies. This outlook requires addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and in the exploitation of VR/AR and BCI to achieve a real human-avatar symbiosis. PMID:29209193

  12. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces.

    PubMed

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J; Latorre, José M; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper considers the future of alternative treatments for auditory verbal hallucinations (AVH) in patients with schizophrenia, treatments that take a social and cognitive approach alongside pharmacological therapy. AVH, the perception of voices in the absence of auditory stimulation, represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain computer interfaces (BCI) are technologies with fast-growing use in medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. The paper therefore anticipates that researchers and clinicians will follow a pathway toward human-avatar symbiosis for AVH by taking full advantage of new technologies. This outlook requires addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and in the exploitation of VR/AR and BCI to achieve a real human-avatar symbiosis.

  13. A reductionist approach to the analysis of learning in brain-computer interfaces.

    PubMed

    Danziger, Zachary

    2014-04-01

    The complexity and scale of brain-computer interface (BCI) studies limit our ability to investigate how humans learn to use BCI systems. It also limits our capacity to develop adaptive algorithms needed to assist users with their control. Adaptive algorithm development is forced offline and typically uses static data sets. But this is a poor substitute for the online, dynamic environment where algorithms are ultimately deployed and interact with an adapting user. This work evaluates a paradigm that simulates the control problem faced by human subjects when controlling a BCI, but which avoids the many complications associated with full-scale BCI studies. Biological learners can be studied in a reductionist way as they solve BCI-like control problems, and machine learning algorithms can be developed and tested in closed loop with the subjects before being translated to full BCIs. The method is to map 19 joint angles of the hand (representing neural signals) to the position of a 2D cursor which must be piloted to displayed targets (a typical BCI task). An investigation is presented on how closely the joint angle method emulates BCI systems; a novel learning algorithm is evaluated, and a performance difference between genders is discussed.
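
    The paradigm's core is a fixed map from 19 hand joint angles to a 2D cursor position that the subject must learn to invert. A minimal sketch, with a random linear map standing in for whatever mapping the study actually used:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Fixed random linear map from 19 joint angles (stand-ins for neural
    # signals) to a 2D cursor position; the study's actual map is not given.
    W = rng.normal(size=(2, 19))

    def cursor_position(joint_angles, gain=0.1):
        """The subject's control problem: drive this output to a target."""
        return gain * (W @ joint_angles)

    target = np.array([0.5, -0.3])
    angles = rng.normal(size=19)        # one posture sample from a data glove
    print("cursor:", cursor_position(angles), "target:", target)
    ```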

  14. Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials.

    PubMed

    Trejo, Leonard J; Rosipal, Roman; Matthews, Bryan

    2006-06-01

    We have developed and tested two electroencephalogram (EEG)-based brain-computer interfaces (BCI) for users to control a cursor on a computer display. Our system uses an adaptive algorithm, based on kernel partial least squares classification (KPLS), to associate patterns in multichannel EEG frequency spectra with cursor controls. Our first BCI, Target Practice, is a system for one-dimensional device control, in which participants use biofeedback to learn voluntary control of their EEG spectra. Target Practice uses a KPLS classifier to map power spectra of 62-electrode EEG signals to rightward or leftward position of a moving cursor on a computer display. Three subjects learned to control motion of a cursor on a video display in multiple blocks of 60 trials over periods of up to six weeks. The best subject's average skill in correct selection of the cursor direction grew from 58% to 88% after 13 training sessions. Target Practice also implements online control of two artifact sources: 1) removal of ocular artifact by linear subtraction of wavelet-smoothed vertical and horizontal electrooculogram (EOG) signals, 2) control of muscle artifact by inhibition of BCI training during periods of relatively high power in the 40-64 Hz band. The second BCI, Think Pointer, is a system for two-dimensional cursor control. Steady-state visual evoked potentials (SSVEP) are triggered by four flickering checkerboard stimuli located in narrow strips at each edge of the display. The user attends to one of the four beacons to initiate motion in the desired direction. The SSVEP signals are recorded from 12 electrodes located over the occipital region. A KPLS classifier is individually calibrated to map multichannel frequency bands of the SSVEP signals to right-left or up-down motion of a cursor on a computer display. The display stops moving when the user attends to a central fixation point. As for Target Practice, Think Pointer also implements wavelet-based online removal of ocular
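
    A minimal stand-in for the KPLS step is PLS regression applied to a kernel matrix of spectral features, with the sign of the regression output selecting the cursor direction. The kernel type, width, component count, and data below are assumptions, not the system's actual settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(6)

    # Synthetic training trials: 62 channels x 16 frequency bins per trial
    # (dimensions assumed), labeled -1 = leftward, +1 = rightward.
    X_train = rng.normal(size=(120, 62 * 16))
    y_train = rng.choice([-1.0, 1.0], size=120)

    # Kernel PLS approximated as linear PLS on an RBF kernel matrix.
    K_train = rbf_kernel(X_train, X_train, gamma=1e-4)
    kpls = PLSRegression(n_components=5).fit(K_train, y_train)

    # New trials are kernelized against the training set before prediction.
    X_new = rng.normal(size=(10, 62 * 16))
    K_new = rbf_kernel(X_new, X_train, gamma=1e-4)
    print(np.sign(kpls.predict(K_new).ravel()))  # cursor direction per trial
    ```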

  15. Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game

    PubMed Central

    de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares

    2018-01-01

    This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549

  16. Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game.

    PubMed

    Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares

    2018-01-01

    This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.

  17. Engineering brain-computer interfaces: past, present and future.

    PubMed

    Hughes, M A

    2014-06-01

    Electricity governs the function of both nervous systems and computers. Whilst ions move in polar fluids to depolarize neuronal membranes, electrons move in the solid-state lattices of microelectronic semiconductors. Joining these two systems together, to create an iono-electric brain-computer interface, is an immense challenge. However, such interfaces offer (and in select clinical contexts have already delivered) a method of overcoming disability caused by neurological or musculoskeletal pathology. To fulfill their theoretical promise, several specific challenges demand consideration. Rate-limiting steps cover a diverse range of disciplines including microelectronics, neuro-informatics, engineering, and materials science. As those who work at the tangible interface between brain and outside world, neurosurgeons are well placed to contribute to, and inform, this cutting edge area of translational research. This article explores the historical background, status quo, and future of brain-computer interfaces; and outlines the challenges to progress and opportunities available to the clinical neurosciences community.

  18. Head-motion-controlled video goggles: preliminary concept for an interactive laparoscopic image display (i-LID).

    PubMed

    Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I

    2009-08-01

    Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance by all on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD
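
    The hands-free control loop reduces to mapping tracked head pose (and distance from the transmitter) to pan and zoom of the displayed image. A hypothetical Python rendering with made-up gains:

    ```python
    import numpy as np

    def update_view(yaw_deg, pitch_deg, dist_cm, pan_gain=8.0, ref_dist_cm=60.0):
        """Map head pose to view parameters: yaw/pitch pan the cropped image,
        moving toward the transmitter zooms in. Gains and limits are made up."""
        pan_x = pan_gain * yaw_deg              # left-right head motion pans
        pan_y = pan_gain * pitch_deg            # up-down head motion pans
        zoom = np.clip(ref_dist_cm / dist_cm, 0.5, 4.0)
        return pan_x, pan_y, zoom

    print(update_view(yaw_deg=5.0, pitch_deg=-2.0, dist_cm=45.0))
    ```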

  19. Brain-computer interface training combined with transcranial direct current stimulation in patients with chronic severe hemiparesis: Proof of concept study.

    PubMed

    Kasashima-Shindo, Yuko; Fujiwara, Toshiyuki; Ushiba, Junichi; Matsushika, Yayoi; Kamatani, Daiki; Oto, Misa; Ono, Takashi; Nishimoto, Atsuko; Shindo, Keiichiro; Kawakami, Michiyuki; Tsuji, Tetsuya; Liu, Meigen

    2015-04-01

    Brain-computer interface technology has been applied to stroke patients to improve their motor function. Event-related desynchronization during motor imagery, which is used as a brain-computer interface trigger, is sometimes difficult to detect in stroke patients. Anodal transcranial direct current stimulation (tDCS) is known to increase event-related desynchronization. This study investigated the adjunctive effect of anodal tDCS for brain-computer interface training in patients with severe hemiparesis. Eighteen patients with chronic stroke participated in this non-randomized controlled study. Subjects were divided between a brain-computer interface group and a tDCS-brain-computer interface group and participated in 10 days of brain-computer interface training. Event-related desynchronization was detected in the affected hemisphere during motor imagery of the affected fingers. The tDCS-brain-computer interface group received anodal tDCS before brain-computer interface training. Event-related desynchronization was evaluated before and after the intervention. The Fugl-Meyer Assessment upper extremity motor score (FM-U) was assessed before, immediately after, and 3 months after the intervention. Event-related desynchronization was significantly increased in the tDCS-brain-computer interface group. The FM-U was significantly increased in both groups. The FM-U improvement was maintained at 3 months in the tDCS-brain-computer interface group. Anodal tDCS can be a conditioning tool for brain-computer interface training in patients with severe hemiparetic stroke.
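
    Event-related desynchronization is conventionally quantified as the percentage change in band power relative to a pre-event baseline, with negative values indicating desynchronization. A minimal sketch follows; the mu band (8-13 Hz), Welch's method, and the epoch lengths are assumed choices, not the study's protocol.

        import numpy as np
        from scipy.signal import welch

        def band_power(x, fs, lo, hi):
            f, p = welch(x, fs=fs, nperseg=min(len(x), fs))
            return p[(f >= lo) & (f <= hi)].sum()

        def erd_percent(baseline, imagery, fs, lo=8.0, hi=13.0):
            """ERD% = (P_imagery - P_baseline) / P_baseline * 100;
            negative values indicate desynchronization (ERD)."""
            p_base = band_power(baseline, fs, lo, hi)
            return 100.0 * (band_power(imagery, fs, lo, hi) - p_base) / p_base

        # Illustrative: imagery epoch with attenuated mu-band power.
        fs = 250
        rng = np.random.default_rng(0)
        baseline = rng.standard_normal(2 * fs)
        imagery = 0.5 * rng.standard_normal(2 * fs)
        print(round(erd_percent(baseline, imagery, fs), 1))  # negative => ERD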

  20. Brain-Computer Interfaces for 1-D and 2-D Cursor Control: Designs Using Volitional Control of the EEG Spectrum or Steady-State Visual Evoked Potentials

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Matthews, Bryan; Rosipal, Roman

    2005-01-01

    We have developed and tested two EEG-based brain-computer interfaces (BCI) for users to control a cursor on a computer display. Our system uses an adaptive algorithm, based on kernel partial least squares classification (KPLS), to associate patterns in multichannel EEG frequency spectra with cursor controls. Our first BCI, Target Practice, is a system for one-dimensional device control, in which participants use biofeedback to learn voluntary control of their EEG spectra. Target Practice uses a KPLS classifier to map power spectra of 30-electrode EEG signals to rightward or leftward position of a moving cursor on a computer display. Three subjects learned to control motion of a cursor on a video display in multiple blocks of 60 trials over periods of up to six weeks. The best subject's average skill in correct selection of the cursor direction grew from 58% to 88% after 13 training sessions. Target Practice also implements online control of two artifact sources: (a) removal of ocular artifact by linear subtraction of wavelet-smoothed vertical and horizontal EOG signals, and (b) control of muscle artifact by inhibition of BCI training during periods of relatively high power in the 40-64 Hz band. The second BCI, Think Pointer, is a system for two-dimensional cursor control. Steady-state visual evoked potentials (SSVEP) are triggered by four flickering checkerboard stimuli located in narrow strips at each edge of the display. The user attends to one of the four beacons to initiate motion in the desired direction. The SSVEP signals are recorded from eight electrodes located over the occipital region. A KPLS classifier is individually calibrated to map multichannel frequency bands of the SSVEP signals to right-left or up-down motion of a cursor on a computer display. The display stops moving when the user attends to a central fixation point. As for Target Practice, Think Pointer also implements wavelet-based online removal of ocular artifact; however, in Think Pointer muscle
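
    The two artifact controls lend themselves to a compact sketch: least-squares subtraction of EOG channels from the EEG, and a gate that flags epochs whose 40-64 Hz power fraction is high. The wavelet smoothing step is omitted here, and the power-fraction threshold is an illustrative assumption.

        import numpy as np

        def remove_eog(eeg, eog):
            """Subtract the least-squares projection of the EOG channels
            from each EEG channel (wavelet smoothing of EOG omitted).
            eeg: (n, c); eog: (n, k)."""
            b, *_ = np.linalg.lstsq(eog, eeg, rcond=None)
            return eeg - eog @ b

        def high_muscle_power(eeg, fs, lo=40.0, hi=64.0, frac=0.3):
            """True if the 40-64 Hz band holds more than `frac` of total
            power, in which case BCI training would be inhibited."""
            spec = np.abs(np.fft.rfft(eeg, axis=0)) ** 2
            f = np.fft.rfftfreq(eeg.shape[0], d=1.0 / fs)
            return spec[(f >= lo) & (f <= hi)].sum() / spec.sum() > frac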

  1. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    PubMed

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

    Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of the five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training, and specifically that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
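
    Information transfer rates such as these are conventionally computed with the Wolpaw formula, B = log2 N + P log2 P + (1 - P) log2[(1 - P)/(N - 1)], scaled by selections per minute. A sketch follows; the class count and selection time are hypothetical, not parameters reported by the study.

        import math

        def wolpaw_itr_bits_per_min(n_classes, acc, trial_s):
            """Wolpaw bit rate per selection, scaled to bits/min."""
            if acc >= 1.0:
                bits = math.log2(n_classes)
            elif acc <= 1.0 / n_classes:
                bits = 0.0  # at or below chance, report zero
            else:
                bits = (math.log2(n_classes) + acc * math.log2(acc)
                        + (1 - acc) * math.log2((1 - acc) / (n_classes - 1)))
            return bits * 60.0 / trial_s

        # Hypothetical: 5 auditory targets, 92% accuracy, 30 s per selection.
        print(round(wolpaw_itr_bits_per_min(5, 0.92, 30.0), 2), "bits/min")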

  2. Connections that Count: Brain-Computer Interface Enables the Profoundly Paralyzed to Communicate

    MedlinePlus


  3. The Human Factors and Ergonomics of P300-Based Brain-Computer Interfaces

    PubMed Central

    Powers, J. Clark; Bieliaieva, Kateryna; Wu, Shuohao; Nam, Chang S.

    2015-01-01

    Individuals with severe neuromuscular impairments face many challenges in communication and manipulation of the environment. Brain-computer interfaces (BCIs) show promise in presenting real-world applications that can provide such individuals with the means to interact with the world using only brain waves. Although there has been a growing body of research in recent years, much relates only to technology, and not to technology in use—i.e., real-world assistive technology employed by users. This review examined the literature to highlight studies that implicate the human factors and ergonomics (HFE) of P300-based BCIs. We assessed 21 studies on three topics to speak directly to improving the HFE of these systems: (1) alternative signal evocation methods within the oddball paradigm; (2) environmental interventions to improve user performance and satisfaction within the constraints of current BCI systems; and (3) measures and methods of measuring user acceptance. We found that HFE is central to the performance of P300-based BCI systems, although researchers do not often make explicit this connection. Incorporation of measures of user acceptance and rigorous usability evaluations, increased engagement of disabled users as test participants, and greater realism in testing will help progress the advancement of P300-based BCI systems in assistive applications. PMID:26266424

  4. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients, due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with performance comparable to commercial systems, yet 800 times cheaper. Our head-mounted system costs 30 USD in materials and operates at over a 120 Hz sampling rate with 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits/s, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs, our system yields effective real-time closed-loop control of devices (10 ms latency) after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.
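
    Binocular 3D gaze estimation reduces to triangulation: each eye defines a gaze ray, and the gaze point can be taken as the midpoint of the shortest segment between the two rays. A sketch of that geometry; the eye positions and target in the test case are invented for illustration.

        import numpy as np

        def gaze_point_3d(p1, d1, p2, d2):
            """Triangulate 3D gaze as the midpoint of the shortest segment
            between the two eyes' gaze rays p + t*d (d need not be unit)."""
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            # Normal equations for minimizing |(p1 + t1 d1) - (p2 + t2 d2)|^2.
            a = np.array([[d1 @ d1, -(d1 @ d2)],
                          [d1 @ d2, -(d2 @ d2)]])
            b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
            t1, t2 = np.linalg.solve(a, b)
            return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

        # Eyes ~6 cm apart converging on a point 40 cm ahead (illustrative).
        p1, p2 = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
        target = np.array([0.0, 0.0, 0.4])
        print(gaze_point_3d(p1, target - p1, p2, target - p2))  # ~ [0, 0, 0.4]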

  5. Head Lice: Prevention and Control

    MedlinePlus


  6. Head injury assessment of non-lethal projectile impacts: A combined experimental/computational method.

    PubMed

    Sahoo, Debasis; Robbe, Cyril; Deck, Caroline; Meyer, Frank; Papy, Alexandre; Willinger, Remy

    2016-11-01

    The main objective of this study is to develop a methodology for assessing the head-injury risk of non-lethal projectile impacts by combining experimental tests with predictive numerical head-injury simulations. A total of 16 non-lethal projectile (NLP) impacts were conducted against a rigid force plate at three different ranges of impact velocity (120, 72, and 55 m/s), and the force/deformation-time data were used for the validation of a finite element (FE) NLP. Good accordance between experimental and simulation data was obtained during validation of the FE NLP, with a high correlation value (>0.98) and a peak force discrepancy of less than 3%. A state-of-the-art finite element head model with enhanced brain and skull material laws and specific head injury criteria was used for numerical computation of NLP impacts. Frontal and lateral FE NLP impacts to the head model at different velocities were performed under LS-DYNA. This is the first time that the lethality of NLP has been assessed by axonal strain computation to predict diffuse axonal injury (DAI) in NLP impacts to the head. In the case of temporo-parietal impact, the min-max risk of DAI is 0-86%. With a velocity above 99.2 m/s there is a greater than 50% risk of DAI for temporo-parietal impacts. All the medium- and high-velocity impacts are susceptible to cause skull fracture, with a percentage risk higher than 90%. This study provides a tool for realistic injury (DAI and skull fracture) assessment during NLP impacts to the human head. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    PubMed Central

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636

  8. Development and Design of a User Interface for a Computer Automated Heating, Ventilation, and Air Conditioning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, B.; /Fermilab

    1999-10-08

    A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start-up and shut-down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning system (HVAC system). The HVAC system is responsible for controlling two subsystems: the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control, a user interface is required. This paper will address methods and strategies used to design and implement an effective user

  9. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility-impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker plus soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance, with an ITR of 37.77 bits/min, among the three conditions (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
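
    The Midas-Touch problem arises because every glance risks being read as a command; the soft-switch condition breaks this by requiring an explicit confirmation on top of gaze dwell. A sketch of such gating logic follows; the dwell threshold and command encoding are illustrative assumptions, not the paper's parameters.

        import time

        def issue_command(gaze_target, dwell_start, switch_pressed,
                          dwell_s=1.0, now=None):
            """Return a wheelchair command only when gaze has dwelt on a
            target long enough AND the soft-switch confirms it; otherwise
            return None so stray glances issue nothing."""
            now = time.monotonic() if now is None else now
            dwelled = gaze_target is not None and (now - dwell_start) >= dwell_s
            if dwelled and switch_pressed:
                return gaze_target  # e.g. one of eight directions or "stop"
            return None

        t0 = time.monotonic()
        print(issue_command("forward", t0 - 1.2, switch_pressed=True))   # forward
        print(issue_command("forward", t0 - 1.2, switch_pressed=False))  # None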

  10. Fully Implanted Brain-Computer Interface in a Locked-In Patient with ALS.

    PubMed

    Vansteensel, Mariska J; Pels, Elmar G M; Bleichner, Martin G; Branco, Mariana P; Denison, Timothy; Freudenburg, Zachary V; Gosselaar, Peter; Leinders, Sacha; Ottens, Thomas H; Van Den Boom, Max A; Van Rijen, Peter C; Aarnoutse, Erik J; Ramsey, Nick F

    2016-11-24

    Options for people with severe paralysis who have lost the ability to communicate orally are limited. We describe a method for communication in a patient with late-stage amyotrophic lateral sclerosis (ALS), involving a fully implanted brain-computer interface that consists of subdural electrodes placed over the motor cortex and a transmitter placed subcutaneously in the left side of the thorax. By attempting to move the hand on the side opposite the implanted electrodes, the patient accurately and independently controlled a computer typing program 28 weeks after electrode placement, at the equivalent of two letters per minute. The brain-computer interface offered autonomous communication that supplemented and at times supplanted the patient's eye-tracking device. (Funded by the Government of the Netherlands and the European Union; ClinicalTrials.gov number, NCT02224469 .).

  11. Design of a microprocessor-based Control, Interface and Monitoring (CIM) unit for turbine engine controls research

    NASA Technical Reports Server (NTRS)

    Delaat, J. C.; Soeder, J. F.

    1983-01-01

    High speed minicomputers were used in the past to implement advanced digital control algorithms for turbine engines. These minicomputers are typically large and expensive. It is desirable for a number of reasons to use microprocessor-based systems for future controls research. They are relatively compact, inexpensive, and representative of the hardware that would be used for actual engine-mounted controls. The Control, Interface, and Monitoring (CIM) Unit contains a microprocessor-based controls computer, the necessary interface hardware, and a system for monitoring the controls computer while it is running an engine. It is presently being used to evaluate an advanced turbofan engine control algorithm.

  12. BCILAB: a platform for brain-computer interface development

    NASA Astrophysics Data System (ADS)

    Kothe, Christian Andreas; Makeig, Scott

    2013-10-01

    Objective. The past two decades have seen dramatic progress in our ability to model brain signals recorded by electroencephalography, functional near-infrared spectroscopy, etc., and to derive real-time estimates of user cognitive state, response, or intent for a variety of purposes: to restore communication by the severely disabled, to effect brain-actuated control and, more recently, to augment human-computer interaction. Continuing these advances, largely achieved through increases in computational power and methods, requires software tools to streamline the creation, testing, evaluation and deployment of new data analysis methods. Approach. Here we present BCILAB, an open-source MATLAB-based toolbox built to address the need for the development and testing of brain-computer interface (BCI) methods by providing an organized collection of over 100 pre-implemented methods and method variants, an easily extensible framework for the rapid prototyping of new methods, and a highly automated framework for systematic testing and evaluation of new implementations. Main results. To validate and illustrate the use of the framework, we present two sample analyses of publicly available data sets from recent BCI competitions and from a rapid serial visual presentation task. We demonstrate the straightforward use of BCILAB to obtain results compatible with the current BCI literature. Significance. The aim of the BCILAB toolbox is to provide the BCI community a powerful toolkit for methods research and evaluation, thereby helping to accelerate the pace of innovation in the field, while complementing the existing spectrum of tools for real-time BCI experimentation, deployment and use.

  13. Toward brain-actuated car applications: Self-paced control with a motor imagery-based brain-computer interface.

    PubMed

    Yu, Yang; Zhou, Zongtan; Yin, Erwei; Jiang, Jun; Tang, Jingsheng; Liu, Yadong; Hu, Dewen

    2016-10-01

    This study presents a paradigm for controlling a car using an asynchronous electroencephalogram (EEG)-based brain-computer interface (BCI) and reports the results of a simulation performed in an experimental environment outside the laboratory. The paradigm uses two distinct motor imagery (MI) tasks, imagined left- and right-hand movements, to generate a multi-task car control strategy consisting of starting the engine, moving forward, turning left, turning right, moving backward, and stopping the engine. Five healthy subjects participated in the online car control experiment, and all successfully controlled the car by following a previously outlined route. Subject S1 exhibited the most satisfactory BCI-based performance, which was comparable to the manual-control performance. We hypothesize that the proposed self-paced car control paradigm based on EEG signals could potentially be used in car control applications, providing a complementary or alternative way for individuals with locked-in disorders to achieve more mobility in the future, as well as a supplementary car-driving strategy to assist healthy people in driving a car. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. On the control of brain-computer interfaces by users with cerebral palsy.

    PubMed

    Daly, Ian; Billinger, Martin; Laparra-Hernández, José; Aloise, Fabio; García, Mariano Lloria; Faller, Josef; Scherer, Reinhold; Müller-Putz, Gernot

    2013-09-01

    Brain-computer interfaces (BCIs) have been proposed as a potential assistive device for individuals with cerebral palsy (CP) to assist with their communication needs. However, it is unclear how well-suited BCIs are to individuals with CP. Therefore, this study aims to investigate to what extent these users are able to gain control of BCIs. This study is conducted with 14 individuals with CP attempting to control two standard online BCIs: (1) based upon sensorimotor rhythm modulations, and (2) based upon steady state visual evoked potentials. Of the 14 users, 8 are able to use one or other of the BCIs, online, with a statistically significant level of accuracy, without prior training. Classification results are driven by neurophysiological activity and not seen to correlate with occurrences of artifacts. However, many of these users' accuracies, while statistically significant, would require either more training or more advanced methods before practical BCI control would be possible. The results indicate that BCIs may be controlled by individuals with CP but that many issues need to be overcome before practical application use may be achieved. This is the first study to assess the ability of a large group of different individuals with CP to gain control of an online BCI system. The results indicate that six users could control a sensorimotor rhythm BCI and three a steady state visual evoked potential BCI at statistically significant levels of accuracy (SMR accuracies, mean ± SD: 0.821 ± 0.116; SSVEP accuracies: 0.422 ± 0.069). Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  15. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  16. Display analysis with the optimal control model of the human operator. [pilot-vehicle display interface and information processing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Levison, W. H.

    1977-01-01

    Application of the optimal control model of the human operator to problems in display analysis is discussed. Those aspects of the model pertaining to the operator-display interface and to operator information processing are reviewed and discussed. The techniques are then applied to the analysis of advanced display/control systems for a Terminal Configured Vehicle. Model results are compared with those obtained in a large, fixed-base simulation.

  17. Computation of Surface Laplacian for tri-polar ring electrodes on high-density realistic geometry head model.

    PubMed

    Junwei Ma; Han Yuan; Sunderam, Sridhar; Besio, Walter; Lei Ding

    2017-07-01

    Neural activity inside the human brain generates electrical signals that can be detected on the scalp. Electroencephalography (EEG) is one of the most widely utilized techniques helping physicians and researchers diagnose and understand various brain diseases. By its nature, EEG has very high temporal resolution but poor spatial resolution. To achieve higher spatial resolution, a novel tri-polar concentric ring electrode (TCRE) has been developed to directly measure the Surface Laplacian (SL). The objective of the present study is to accurately calculate the SL for TCREs based on a realistic geometry head model. A locally dense mesh was proposed to represent the head surface, where the locally dense parts match the small structural components of the TCRE. Other areas were left without dense mesh to reduce the computational load. We conducted computer simulations to evaluate the performance of the proposed mesh and evaluated possible numerical errors as compared with a low-density model. Finally, with the achieved accuracy, we present the computed forward lead field of the SL for TCREs for the first time in a realistic geometry head model and demonstrate that it has better spatial resolution than SL computed from classic EEG recordings.
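
    The accuracy argument behind ring-electrode Laplacian estimation can be made explicit with the standard two-ring Taylor expansion; this is the textbook construction for ring potentials averaged at radii r and 2r about a central disc at potential V_0 (the exact TCRE weighting used in the paper may differ):

        \bar{V}_r = V_0 + \frac{r^2}{4}\,\nabla^2 V + \frac{r^4}{64}\,\nabla^4 V + O(r^6)

        16(\bar{V}_r - V_0) - (\bar{V}_{2r} - V_0) = 3r^2\,\nabla^2 V + O(r^6)
        \quad\Rightarrow\quad
        \nabla^2 V \approx \frac{16(\bar{V}_r - V_0) - (\bar{V}_{2r} - V_0)}{3r^2}

    The fourth-order terms cancel in the weighted difference, which is why combining two rings yields a higher-accuracy Laplacian estimate than a single ring.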

  18. WinTICS-24 --- A Telescope Control Interface for MS Windows

    NASA Astrophysics Data System (ADS)

    Hawkins, R. Lee

    1995-12-01

    WinTICS-24 is a telescope control system interface and observing assistant written in Visual Basic for MS Windows. It provides the ability to control a telescope and up to 3 other instruments via the serial ports on an IBM-PC compatible computer, all from one consistent user interface. In addition to telescope control, WinTICS contains an observing logbook, trouble log (which can automatically email its entries to a responsible person), lunar phase display, object database (which allows the observer to type in the name of an object and automatically slew to it), a time of minimum calculator for eclipsing binary stars, and an interface to the Guide CD-ROM for bringing up finder charts of the current telescope coordinates. Currently WinTICS supports control of DFM telescopes, but is easily adaptable to other telescopes and instrumentation.

  19. Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).

    PubMed

    Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C

    2016-08-01

    Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) and of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors, approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy is dependent on the induced field intensity. In clinical practice, a software package called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when transducer arrays are adapted to its location. Ideally, a personalized head model could be created for each patient, to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than from distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models, with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of the electric field distribution.

  20. A Magneto-Inductive Sensor Based Wireless Tongue-Computer Interface

    PubMed Central

    Huo, Xueliang; Wang, Jia; Ghovanloo, Maysam

    2015-01-01

    We have developed a noninvasive, unobtrusive magnetic wireless tongue-computer interface, called "Tongue Drive," to provide people with severe disabilities with flexible and effective computer access and environment control. A small permanent magnet, secured on the tongue by implantation, piercing, or tissue adhesives, is utilized as a tracer to track the tongue movements. The magnetic field variations inside and around the mouth due to the tongue movements are detected by a pair of three-axial linear magneto-inductive sensor modules mounted bilaterally on a headset near the user's cheeks. After being wirelessly transmitted to a portable computer, the sensor output signals are processed by a differential field cancellation algorithm to eliminate the external magnetic field interference, and translated into user control commands, which could then be used to access a desktop computer, maneuver a powered wheelchair, or control other devices in the user's environment. The system has been successfully tested on six able-bodied subjects for computer access by defining six individual commands to resemble mouse functions. Results show that the Tongue Drive system response time for 87% correctly completed commands is 0.8 s, which yields an information transfer rate of ~130 b/min. PMID:18990653
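
    The differential field cancellation step exploits the fact that a distant interference source produces nearly the same field at both cheek-mounted modules, while the nearby tongue magnet does not; subtracting the two module outputs therefore suppresses the interference while retaining the magnet's signature. A sketch with invented signal content (the actual algorithm may weight axes differently):

        import numpy as np

        def cancel_external_field(left, right):
            """left, right: (n_samples, 3) outputs of the two 3-axis modules.
            A far-field source contributes almost equally to both, so the
            difference retains mostly the nearby tongue magnet's field."""
            return left - right

        # Illustrative: common 60 Hz interference plus a magnet-like signal
        # that couples strongly only into the left module.
        t = np.linspace(0, 1, 1000)
        interference = 0.8 * np.sin(2 * np.pi * 60 * t)[:, None] * np.ones((1, 3))
        magnet = np.outer(np.sin(2 * np.pi * 2 * t), [1.0, 0.2, 0.1])
        left, right = interference + magnet, interference + 0.1 * magnet
        print(cancel_external_field(left, right)[:3])  # interference removed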

  1. Using minimal human-computer interfaces for studying the interactive development of social awareness

    PubMed Central

    Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi

    2014-01-01

    According to the enactive approach to cognitive science, perception is essentially a skillful engagement with the world. Learning how to engage via a human-computer interface (HCI) can therefore be taken as an instance of developing a new mode of experiencing. Similarly, social perception is theorized to be primarily constituted by skillful engagement between people, which implies that it is possible to investigate the origins and development of social awareness using multi-user HCIs. We analyzed the trial-by-trial objective and subjective changes in sociality that took place during a perceptual crossing experiment in which embodied interaction between pairs of adults was mediated over a minimalist haptic HCI. Since that study required participants to implicitly relearn how to mutually engage so as to perceive each other's presence, we hypothesized that there would be indications that the initial developmental stages of social awareness were recapitulated. Preliminary results reveal that, despite the lack of explicit feedback about task performance, there was a trend for the clarity of social awareness to increase over time. We discuss the methodological challenges involved in evaluating whether this trend was characterized by distinct developmental stages of objective behavior and subjective experience. PMID:25309490

  2. Mold Heating and Cooling Pump Package Operator Interface Controls Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josh A. Salmond

    2009-08-07

    The modernization of the Mold Heating and Cooling Pump Package Operator Interface (MHC PP OI) consisted of upgrading the antiquated single-board computer with a proprietary operating system to off-the-shelf hardware and off-the-shelf software with customizable software options. The pump package is the machine interface between a central heating and cooling system that pumps heat transfer fluid through an injection or compression mold base on a local plastic molding machine. The operator interface provides the intelligent means of controlling this pumping process. Strict temperature control of a mold allows the production of high quality parts with tight tolerances and low residual stresses. The products fabricated are used on multiple programs.

  3. Transfer of control system interface solutions from other domains to the thermal power industry.

    PubMed

    Bligård, L-O; Andersson, J; Osvalder, A-L

    2012-01-01

    In a thermal power plant, the operators' roles are to control and monitor the process to achieve efficient and safe production. The human-machine interfaces play a central part in this. The interfaces need to be updated and upgraded together with the technical functionality to maintain optimal operation. One way of achieving relevant updates is to study other domains and see how they have solved similar issues in their design solutions. The purpose of this paper is to present how interface design ideas can be transferred from domains with operator control to thermal power plants. In the study, 15 domains were compared using a model for categorisation of human-machine systems. The domain comparison showed that nuclear power, refinery, and ship engine control were most similar to thermal power control. From the findings, a basic interface structure and three specific display solutions were proposed for thermal power control: process parameter overview, plant overview, and feed water view. The systematic comparison of the properties of a human-machine system allowed interface designers to find suitable objects, structures, and navigation logics in a range of domains that could be transferred to the thermal power domain.

  4. Software platform for rapid prototyping of NIRS brain computer interfacing techniques.

    PubMed

    Matthews, Fiachra; Soraghan, Christopher; Ward, Tomas E; Markham, Charles; Pearlmutter, Barak A

    2008-01-01

    This paper describes the control system of a next-generation optical brain-computer interface (BCI). Using functional near-infrared spectroscopy (fNIRS) as a BCI modality is a relatively new concept, and research has only begun to explore approaches for its implementation. A system is therefore needed with which the signal processing and classification techniques available in the BCI community can be investigated and, most importantly, easily tested in real-time applications. The system we describe was built using LabVIEW, a graphical programming language designed for interaction with National Instruments hardware. The platform allows complete configurability, from hardware control and regulation to testing and filtering, in a graphical interface environment.

  5. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface are reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited, with a small set of commands. The program's complexity can then be increased incrementally. The rule base captures the operator's behavior and three types of instructions to the underlying application software. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
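
    The rule-firing cycle described here is ordinary forward chaining: antecedents are matched against facts (including observed operator behavior), and the consequents of fired rules are issued as commands to the application. A toy sketch, with invented rule and fact contents:

        def infer(facts, rules):
            """facts: set of strings; rules: list of (antecedents, command).
            Fire every rule whose antecedents are all present in facts and
            collect its consequent as a command for the application."""
            commands = []
            for antecedents, command in rules:
                if all(a in facts for a in antecedents):
                    commands.append(command)
            return commands

        rules = [({"novice_user", "bold_requested"}, "menu: show formatting help"),
                 ({"bold_requested"}, "editor: toggle bold")]
        print(infer({"bold_requested", "novice_user"}, rules))  # both rules fire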

  6. Plug&Play Brain-Computer Interfaces for effective Active and Assisted Living control.

    PubMed

    Mora, Niccolò; De Munari, Ilaria; Ciampolini, Paolo; Del R Millán, José

    2017-08-01

    Brain-Computer Interfaces (BCI) rely on the interpretation of brain activity to provide people with disabilities with an alternative/augmentative interaction path. In light of this, BCI could be considered an enabling technology in many fields, including Active and Assisted Living (AAL) systems control. Indeed, interaction barriers could be removed, enabling users with severe motor impairments to gain control over a wide range of AAL features. In this paper, a cost-effective BCI solution, targeted at (but not limited to) AAL system control, is presented. A custom hardware module is briefly reviewed, while signal processing techniques are covered in more depth. Steady-state visual evoked potentials (SSVEP) are exploited in this work as the operating BCI protocol. In contrast with most common SSVEP-BCI approaches, we propose the definition of a prediction confidence indicator, which is shown to improve overall classification accuracy. The confidence indicator is derived without any subject-specific approach and is stable across users: it can thus be defined once and then shared between different users. This allows a kind of Plug&Play interaction. Furthermore, by modelling rest/idle periods with the confidence indicator, it is possible to detect active control periods and separate them from "background activity": this is crucial for real-time, self-paced operation. Finally, the indicator also allows the observation window length to be chosen dynamically, improving the system's responsiveness and the user's comfort. Good results are achieved under such operating conditions, including a false positive rate of 0.16 min⁻¹, which outperforms current literature findings.
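
    One simple form such a confidence indicator can take (an assumption here, not necessarily the paper's definition) is the ratio of the best to the second-best per-target SSVEP score, with the decision withheld, and the period treated as rest/idle, whenever the ratio falls below a threshold:

        import numpy as np

        def confident_decision(scores, targets, ratio_thresh=1.5):
            """scores: per-target SSVEP scores (e.g. band power values).
            Emit a command only when the best score clearly dominates the
            runner-up; otherwise report idle (self-paced operation)."""
            order = np.argsort(scores)[::-1]
            best, second = scores[order[0]], scores[order[1]]
            confidence = best / max(second, 1e-12)
            if confidence >= ratio_thresh:
                return targets[order[0]], confidence
            return None, confidence  # below threshold: rest/idle period

        print(confident_decision(np.array([4.2, 1.3, 1.1, 0.9]),
                                 ["lights", "door", "tv", "blinds"]))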

  7. Experiment on a novel user input for computer interface utilizing tongue input for the severely disabled.

    PubMed

    Kencana, Andy Prima; Heng, John

    2008-11-01

    This paper introduces a novel passive tongue control and tracking device. The device is intended to be used by severely disabled or quadriplegic persons. The main distinction of this device from other existing tongue tracking devices is that the sensor employed is passive: it requires no powered electrical sensor to be inserted into the user's mouth, and hence no trailing wires. This haptic interface device employs inductive sensors to track the position of the user's tongue. The device is able to perform two main PC functions, those of the keyboard and the mouse. The results show that this device allows a severely disabled person to have some control over the environment, such as turning on, turning off, or controlling daily electrical devices and appliances, or to serve as a viable PC Human Computer Interface (HCI) through tongue control. The operating principle and set-up of such a novel passive tongue HCI have been established with successful laboratory trials and experiments. Further clinical trials will be required to test the device on disabled persons before it is ready for future commercial development.

  8. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. EMGs of eleven facial gestures are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter, and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. The combinations with the highest recognition accuracy in each group are chosen. An average accuracy >90% for the chosen combinations demonstrated their suitability as command controllers.
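
    The feature pipeline, band-pass filtering followed by windowed root mean square, is straightforward to sketch. The cutoff frequencies (which assume a sampling rate of about 1 kHz) and the window length are illustrative assumptions, not the study's settings:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def rms_features(emg, fs, lo=20.0, hi=450.0, win_s=0.2):
            """Band-pass multichannel EMG, then take windowed RMS values
            as features. emg: (n_samples, n_channels)."""
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            x = filtfilt(b, a, emg, axis=0)
            win = int(win_s * fs)
            n = (len(x) // win) * win          # drop the ragged tail
            frames = x[:n].reshape(-1, win, x.shape[1])
            return np.sqrt((frames ** 2).mean(axis=1))  # (n_windows, n_channels)

        # Illustrative: 2 s of 4-channel EMG at 1 kHz -> 10 RMS frames.
        emg = np.random.randn(2000, 4)
        print(rms_features(emg, fs=1000).shape)  # (10, 4)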

  9. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials

    PubMed Central

    2014-01-01

    Background People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve speed of the tactile BCI system. Methods Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball-paradigm. Results Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed. Conclusion We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses feasibility of tactile ERPs for BCI based wheelchair control. PMID:24428900

  10. Human-system interfaces for space cognitive awareness

    NASA Astrophysics Data System (ADS)

    Ianni, J.

    Space situational awareness is a human activity. We have advanced sensors and automation capabilities but these continue to be tools for humans to use. The reality is, however, that humans cannot take full advantage of the power of these tools due to time constraints, cognitive limitations, poor tool integration, poor human-system interfaces, and other reasons. Some excellent tools may never be used in operations and, even if they were, they may not be well suited to provide a cohesive and comprehensive picture. Recognizing this, the Air Force Research Laboratory (AFRL) is applying cognitive science principles to increase the knowledge derived from existing tools and creating new capabilities to help space analysts and decision makers. At the center of this research is Sensemaking Support Environment technology. The concept is to create cognitive-friendly computer environments that connect critical and creative thinking for holistic decision making. AFRL is also investigating new visualization technologies for multi-sensor exploitation and space weather, human-to-human collaboration technologies, and other technology that will be discussed in this paper.

  11. Computational modeling of human head under blast in confined and open spaces: primary blast injury.

    PubMed

    Rezaei, A; Salimi Jazi, M; Karami, G

    2014-01-01

    In this paper, a computational model for the biomechanical analysis of primary blast injuries is presented. The responses of the brain in terms of mechanical parameters under different blast spaces, including open, semi-confined, and confined environments, are studied. The effect of direct and indirect blast waves from neighboring walls in confined environments is taken into consideration. A 50th percentile finite element head model is exposed to blast waves of different intensities. In open space, the head experiences a sudden intracranial pressure (ICP) change, which vanishes in a matter of a few milliseconds. The situation is similar in semi-confined space, but in confined space, reflections from the walls create a number of subsequent peaks in ICP with a longer duration. The analysis procedure is based on simultaneous simulation of the interaction of the deformable head and its components with the blast wave propagation. It is concluded that, compared with the open and semi-confined settings, the walls in the confined-space scenario considerably enhance the risk of primary blast injuries, because indirect blast waves transfer a larger amount of damaging energy to the head. Copyright © 2013 John Wiley & Sons, Ltd.

  12. The impact of goal-oriented task design on neurofeedback learning for brain-computer interface control.

    PubMed

    McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T

    2018-02-01

    Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.

  13. Hardware enhance of brain computer interfaces

    NASA Astrophysics Data System (ADS)

    Wu, Jerry; Szu, Harold; Chen, Yuechen; Guo, Ran; Gu, Xixi

    2015-05-01

    The history of brain-computer interfaces (BCIs) starts with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In recent years, BCI research has focused on invasive, partially invasive, and non-invasive BCIs. Furthermore, EEG can also be applied to telepathic communication, which could provide the basis for brain-based communication using imagined speech. It is possible to use EEG signals to discriminate the vowels and consonants embedded in spoken and in imagined words, with potential military applications. In this report, we begin with an example of using high-density EEG and analyze the results using BCIs. The BCI in this work is enhanced by a field-programmable gate array (FPGA) board with optimized two-dimensional (2D) image Fast Fourier Transform (FFT) analysis.
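
    The operation the FPGA accelerates, a 2D FFT over an electrode-grid "image", can be mimicked for a single time frame in a few lines; the 16 x 16 grid size is an illustrative assumption:

        import numpy as np

        # Treat one time-frame of a high-density electrode grid as an image
        # and take its 2D FFT, yielding the spatial-frequency content that
        # the FPGA pipeline would compute in hardware.
        frame = np.random.randn(16, 16)                      # 256-channel snapshot
        spatial_spectrum = np.fft.fftshift(np.abs(np.fft.fft2(frame)))
        print(spatial_spectrum.shape)                        # (16, 16)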

  14. Interface standards for computer equipment

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The ability to configure data systems using modules provided by independent manufacturers is complicated by the wide range of electrical, mechanical, and functional characteristics exhibited within the equipment provided by different manufacturers of computers, peripherals, and terminal devices. A number of international organizations were and still are involved in the creation of standards that enable devices to be interconnected with minimal difficulty, usually involving only a cable or data bus connection that is defined by the standard. The elements addressed by an interface standard are described, and the most prominent interface standards presently in use are identified and described.

  15. Crew interface analysis: Selected articles on space human factors research, 1987 - 1991

    NASA Technical Reports Server (NTRS)

    Bagian, Tandi (Compiler)

    1993-01-01

    As part of the Flight Crew Support Division at NASA, the Crew Interface Analysis Section is dedicated to the study of human factors in the manned space program. It assumes a specialized role that focuses on answering operational questions pertaining to NASA's Space Shuttle and Space Station Freedom Programs. One of the section's key contributions is to provide knowledge and information about human capabilities and limitations that promote optimal spacecraft and habitat design and use to enhance crew safety and productivity. The section provides human factors engineering for the ongoing missions as well as proposed missions that aim to put human settlements on the Moon and Mars. Research providing solutions to operational issues is the primary objective of the Crew Interface Analysis Section. The studies represent such subdisciplines as ergonomics, space habitability, man-computer interaction, and remote operator interaction.

  16. A self-paced brain-computer interface for controlling a robot simulator: an online event labelling paradigm and an extended Kalman filter based algorithm for online training.

    PubMed

    Tsui, Chun Sing Louis; Gan, John Q; Roberts, Stephen J

    2009-03-01

    Due to the non-stationarity of EEG signals, online training and adaptation are essential to EEG based brain-computer interface (BCI) systems. Self-paced BCIs offer more natural human-machine interaction than synchronous BCIs, but it is a great challenge to train and adapt a self-paced BCI online because the user's control intention and timing are usually unknown. This paper proposes a novel motor imagery based self-paced BCI paradigm for controlling a simulated robot in a specifically designed environment which is able to provide user's control intention and timing during online experiments, so that online training and adaptation of the motor imagery based self-paced BCI can be effectively investigated. We demonstrate the usefulness of the proposed paradigm with an extended Kalman filter based method to adapt the BCI classifier parameters, with experimental results of online self-paced BCI training with four subjects.
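
    For a linear classifier output, an extended Kalman filter adaptation of the weights reduces to the standard Kalman update with a random-walk state model. A sketch of that special case follows (the paper's actual state and observation models are not reproduced here); the noise settings q and r are illustrative assumptions:

        import numpy as np

        class KalmanWeightAdapter:
            """Track linear classifier weights w as a random-walk state;
            for the linear output y = x @ w, the EKF reduces to this
            standard Kalman update."""

            def __init__(self, dim, q=1e-4, r=1.0):
                self.w = np.zeros(dim)   # weight estimate (state mean)
                self.P = np.eye(dim)     # state covariance
                self.q, self.r = q, r    # process / observation noise

            def update(self, x, y):
                self.P += self.q * np.eye(len(self.w))   # predict (random walk)
                s = x @ self.P @ x + self.r               # innovation variance
                k = self.P @ x / s                        # Kalman gain
                self.w += k * (y - x @ self.w)            # correct with label y
                self.P -= np.outer(k, x) @ self.P
                return self.w

        # Illustrative online adaptation toward a fixed target weight vector.
        adapter = KalmanWeightAdapter(dim=3)
        w_true = np.array([0.5, -1.0, 2.0])
        rng = np.random.default_rng(0)
        for _ in range(200):
            x = rng.standard_normal(3)
            adapter.update(x, x @ w_true + 0.1 * rng.standard_normal())
        print(np.round(adapter.w, 2))  # approaches [0.5, -1.0, 2.0]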

  17. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  18. Media independent interface. Interface control document

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A Media Independent Interface (MII) is specified, using current standards in the industry. The MII is described in hierarchical fashion. At the base are IEEE/International Standards Organization (ISO) documents (standards) which describe the functionality of the software modules or layers and their interconnection. These documents describe primitives which are to transcend the MII. The intent of the MII is to provide a universal interface to one or more Media Access Controls (MACs) for the Logical Link Controller and Station Manager. This interface includes both a standardized electrical and mechanical interface and a standardized functional specification which defines the services expected from the MAC.

  19. The control of float zone interfaces by the use of selected boundary conditions

    NASA Technical Reports Server (NTRS)

    Foster, L. M.; Mcintosh, J.

    1983-01-01

    The main goal of the float zone crystal growth project of NASA's Materials Processing in Space Program is to thoroughly understand the molten zone/freezing crystal system and all the mechanisms that govern this system. The surface boundary conditions required to give flat float zone solid melt interfaces were studied and computed. The results provide float zone furnace designers with better methods for controlling solid melt interface shapes and for computing thermal profiles and gradients. Documentation and a user's guide were provided for the computer software.

  20. Efficacy of brain-computer interface-driven neuromuscular electrical stimulation for chronic paresis after stroke.

    PubMed

    Mukaino, Masahiko; Ono, Takashi; Shindo, Keiichiro; Fujiwara, Toshiyuki; Ota, Tetsuo; Kimura, Akio; Liu, Meigen; Ushiba, Junichi

    2014-04-01

    Brain-computer interface technology is of great interest to researchers as a potential therapeutic measure for people with severe neurological disorders. The aim of this study was to examine the efficacy of a brain-computer interface by comparing conventional neuromuscular electrical stimulation and brain-computer interface-driven neuromuscular electrical stimulation, using an A-B-A-B withdrawal single-subject design. A 38-year-old male with severe hemiplegia due to a putaminal haemorrhage participated in this study. The design involved 2 epochs. In epoch A, the patient attempted to open his fingers during the application of neuromuscular electrical stimulation, irrespective of his actual brain activity. In epoch B, neuromuscular electrical stimulation was applied only when a significant motor-related cortical potential was observed in the electroencephalogram. The subject initially showed diffuse functional magnetic resonance imaging activation and small electroencephalogram responses while attempting finger movement. Epoch A was associated with few neurological or clinical signs of improvement. Epoch B, with a brain-computer interface, was associated with marked lateralization of electroencephalogram (EEG) and blood oxygenation level dependent responses. Voluntary electromyogram (EMG) activity, with significant EEG-EMG coherence, was also prompted. Clinical improvement in upper-extremity function and muscle tone was observed. These results indicate that self-directed training with a brain-computer interface may induce activity-dependent cortical plasticity and promote functional recovery. This preliminary clinical investigation encourages further research using a controlled design.

  1. Pre-frontal control of closed-loop limbic neurostimulation by rodents using a brain-computer interface

    NASA Astrophysics Data System (ADS)

    Widge, Alik S.; Moritz, Chet T.

    2014-04-01

    Objective. There is great interest in closed-loop neurostimulators that sense and respond to a patient's brain state. Such systems may have value for neurological and psychiatric illnesses where symptoms have high intraday variability. Animal models of closed-loop stimulators would aid preclinical testing. We therefore sought to demonstrate that rodents can directly control a closed-loop limbic neurostimulator via a brain-computer interface (BCI). Approach. We trained rats to use an auditory BCI controlled by single units in prefrontal cortex (PFC). The BCI controlled electrical stimulation in the medial forebrain bundle, a limbic structure involved in reward-seeking. Rigorous offline analyses were performed to confirm volitional control of the neurostimulator. Main results. All animals successfully learned to use the BCI and neurostimulator, with closed-loop control of this challenging task demonstrated at 80% of PFC recording locations. Analysis across sessions and animals confirmed statistically robust BCI control and specific, rapid modulation of PFC activity. Significance. Our results provide a preliminary demonstration of a method for emotion-regulating closed-loop neurostimulation. They further suggest that activity in PFC can be used to control a BCI without pre-training on a predicate task. This offers the potential for BCI-based treatments in refractory neurological and mental illness.

  2. Analogue mouse pointer control via an online steady state visual evoked potential (SSVEP) brain-computer interface

    NASA Astrophysics Data System (ADS)

    Wilson, John J.; Palaniappan, Ramaswamy

    2011-04-01

    The steady-state visual evoked potential (SSVEP) protocol has recently become a popular paradigm in brain-computer interface (BCI) applications. Typically (regardless of function) these applications offer the user a binary selection of targets that perform correspondingly discrete actions. Such discrete control systems are appropriate for applications that are inherently isolated in nature, such as selecting numbers from a keypad to be dialled or letters from an alphabet to be spelled. However, motivation exists for users to employ proportional control methods in intrinsically analogue tasks such as the movement of a mouse pointer. This paper introduces an online BCI in which control of a mouse pointer is directly proportional to the user's intent. Performance was measured over a series of pointer movement tasks and compared to the traditional discrete output approach. Analogue control allowed subjects to move the pointer faster to the cued target location compared to discrete output, but suffered more undesired movements overall. Best performance was achieved when combining the movement threshold of traditional discrete techniques with the range of movement offered by proportional control.
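
    A minimal single-channel sketch of the idea, assuming two targets flickering at hypothetical frequencies: pointer velocity is made proportional to the difference in SSVEP amplitude at the two frequencies, with a movement threshold retained from the discrete approach. Sampling rate, frequencies, gain, and threshold are all assumptions.

        import numpy as np

        FS = 256                       # sampling rate in Hz (assumed)
        F_LEFT, F_RIGHT = 10.0, 12.0   # hypothetical flicker frequencies

        def ssvep_amplitude(eeg, f, fs=FS):
            """Spectral amplitude of one EEG channel at flicker frequency f."""
            spectrum = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size)))
            freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
            return spectrum[np.argmin(np.abs(freqs - f))]

        def pointer_velocity(eeg, gain=0.1, threshold=2.0):
            """Velocity proportional to the SSVEP amplitude difference; below
            the threshold the pointer stays still, as in discrete techniques."""
            drive = ssvep_amplitude(eeg, F_RIGHT) - ssvep_amplitude(eeg, F_LEFT)
            return 0.0 if abs(drive) < threshold else gain * drive

        # Synthetic 2-second window dominated by the 12 Hz target:
        t = np.arange(2 * FS) / FS
        rng = np.random.default_rng(0)
        eeg = np.sin(2 * np.pi * F_RIGHT * t) + 0.5 * rng.normal(size=t.size)
        print(pointer_velocity(eeg))   # positive: pointer moves right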

  3. INL Multi-Robot Control Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot’s condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.

  4. Interfacing An Intelligent Decision-Maker To A Real-Time Control System

    NASA Astrophysics Data System (ADS)

    Evers, D. C.; Smith, D. M.; Staros, C. J.

    1984-06-01

    This paper discusses some of the practical aspects of implementing expert systems in a real-time environment. There is a conflict between the needs of a process control system and the computational load imposed by intelligent decision-making software. The computation required to manage a real-time control problem is primarily concerned with routine calculations which must be executed in real time. On most current hardware, non-trivial AI software should not be forced to operate under real-time constraints. In order for the system to work efficiently, the two processes must be separated by a well-defined interface. Although the precise nature of the task separation will vary with the application, the definition of the interface will need to follow certain fundamental principles in order to provide functional separation. This interface was successfully implemented in the expert scheduling software currently running the automated chemical processing facility at Lockheed-Georgia. Potential applications of this concept in the areas of airborne avionics and robotics will be discussed.

  5. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  6. User Interface Developed for Controls/CFD Interdisciplinary Research

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA Lewis Research Center, in conjunction with the University of Akron, is developing analytical methods and software tools to create a cross-discipline "bridge" between controls and computational fluid dynamics (CFD) technologies. Traditionally, the controls analyst has used simulations based on large lumping techniques to generate low-order linear models convenient for designing propulsion system controls. For complex, high-speed vehicles such as the High Speed Civil Transport (HSCT), simulations based on CFD methods are required to capture the relevant flow physics. The use of CFD should also help reduce the development time and costs associated with experimentally tuning the control system. The initial application for this research is the High Speed Civil Transport inlet control problem. A major aspect of this research is the development of a controls/CFD interface for non-CFD experts, to facilitate the interactive operation of CFD simulations and the extraction of reduced-order, time-accurate models from CFD results. A distributed computing approach for implementing the interface is being explored. Software being developed as part of the Integrated CFD and Experiments (ICE) project provides the basis for the operating environment, including run-time displays and information (data base) management. Message-passing software is used to communicate between the ICE system and the CFD simulation, which can reside on distributed, parallel computing systems. Initially, the one-dimensional Large-Perturbation Inlet (LAPIN) code is being used to simulate a High Speed Civil Transport type inlet. LAPIN can model real supersonic inlet features, including bleeds, bypasses, and variable geometry, such as translating or variable-ramp-angle centerbodies. Work is in progress to use parallel versions of the multidimensional NPARC code.

  7. An online brain-machine interface using decoding of movement direction from the human electrocorticogram

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten

    2012-08-01

    A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG electrodes for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one of two directions. Significant BMI control was achieved in four of five subjects, with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
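
    For illustration, a toy two-class direction decoder built on band-power features; the band edges and the nearest-mean classifier are stand-ins for exposition, not the decoding method used in the study.

        import numpy as np

        def band_power(x, fs, lo, hi):
            """Mean spectral power of one ECoG channel in [lo, hi] Hz."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
            return spec[(freqs >= lo) & (freqs <= hi)].mean()

        class NearestMeanDecoder:
            """Two-class direction decoder: assign each trial to the class
            whose mean feature vector is closest (a simple stand-in)."""
            def fit(self, X, y):
                self.means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
                return self
            def predict(self, X):
                classes = list(self.means)
                dists = np.stack([np.linalg.norm(X - self.means[c], axis=1)
                                  for c in classes], axis=1)
                return np.array(classes)[dists.argmin(axis=1)]

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
        y = np.array([0] * 20 + [1] * 20)
        dec = NearestMeanDecoder().fit(X, y)
        print((dec.predict(X) == y).mean())   # training accuracy, ~1.0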

  8. An Architectural Experience for Interface Design

    ERIC Educational Resources Information Center

    Gong, Susan P.

    2016-01-01

    The problem of human-computer interface design was brought to the foreground with the emergence of the personal computer, the increasing complexity of electronic systems, and the need to accommodate the human operator in these systems. With each new technological generation discovering the interface design problems of its own technologies, initial…

  9. A finite element study on the mechanical response of the head-neck interface of hip implants under realistic forces and moments of daily activities: Part 1, level walking.

    PubMed

    Farhoudi, Hamidreza; Fallahnezhad, Khosro; Oskouei, Reza H; Taylor, Mark

    2017-11-01

    This paper investigates the mechanical response of a modular head-neck interface of hip joint implants under realistic loads of level walking. The realistic loads of the walking activity consist of three-dimensional gait forces and the associated frictional moments. These forces and moments were extracted for a 32 mm metal-on-metal bearing couple. A previously reported geometry of a modular CoCr/CoCr head-neck interface with a proximal contact was used for this investigation. An explicit finite element analysis was performed to investigate the interface mechanical responses. To study the level of contribution and also the effect of superposition of the load components, three different scenarios of loading were studied: gait forces only, frictional moments only, and combined gait forces and frictional moments. Stress field, micro-motions, shear stresses and fretting work at the contacting nodes of the interface were analysed. Gait forces only were found to significantly influence the mechanical environment of the head-neck interface by temporarily extending the contacting area (8.43% of initially non-contacting surface nodes temporarily came into contact), and therefore changing the stress field and resultant micro-motions during the gait cycle. The frictional moments only did not cause considerable changes in the mechanical response of the interface (only 0.27% of the non-contacting surface nodes temporarily came into contact). However, when superposed with the gait forces, the mechanical response of the interface, particularly micro-motions and fretting work, changed compared to the forces-only case. The normal contact stresses and micro-motions obtained from this realistic load-controlled study were typically in the ranges of 0-275 MPa and 0-38 µm, respectively. These ranges were found comparable to previous experimental displacement-controlled pin/cylinder-on-disk fretting corrosion studies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Integrated Computer Controlled Glow Discharge Tube

    NASA Astrophysics Data System (ADS)

    Kaiser, Erik; Post-Zwicker, Andrew

    2002-11-01

    An "Interactive Plasma Display" was created for the Princeton Plasma Physics Laboratory to demonstrate the characteristics of plasma to various science education outreach programs. From high school students and teachers, to undergraduate students and visitors to the lab, the plasma device will be a key component in advancing the public's basic knowledge of plasma physics. The device is fully computer controlled using LabVIEW, a touchscreen Graphical User Interface [GUI], and a GPIB interface. Utilizing a feedback loop, the display is fully autonomous in controlling pressure, as well as in monitoring the safety aspects of the apparatus. With a digital convectron gauge continuously monitoring pressure, the computer interface analyzes the input signals, while making changes to a digital flow controller. This function works independently of the GUI, allowing the user to simply input and receive a desired pressure; quickly, easily, and intuitively. The discharge tube is a 36" x 4"id glass cylinder with 3" side port. A 3000 volt, 10mA power supply, is used to breakdown the plasma. A 300 turn solenoid was created to demonstrate the magnetic pinching of a plasma. All primary functions of the device are controlled through the GUI digital controllers. This configuration allows for operators to safely control the pressure (100mTorr-1Torr), magnetic field (0-90Gauss, 7amps, 10volts), and finally, the voltage applied across the electrodes (0-3000v, 10mA).

  11. Investigation of human-robot interface performance in household environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  12. 'Goats that stare at men': dwarf goats alter their behaviour in response to human head orientation, but do not spontaneously use head direction as a cue in a food-related context.

    PubMed

    Nawroth, Christian; von Borell, Eberhard; Langbein, Jan

    2015-01-01

    Recently, comparative research on the mechanisms and species-specific adaptive values of attributing attentive states and using communicative cues has gained increased interest, particularly in non-human primates, birds, and dogs. Here, we investigate these phenomena in a farm animal species, the dwarf goat (Capra aegagrus hircus). In the first experiment, we investigated the effects of different human head and body orientations, as well as human experimenter presence/absence, on the behaviour of goats in a food-anticipating paradigm. Over a 30-s interval, the experimenter engaged in one of four different postures or behaviours (head and body towards the subject-'Control', head to the side, head and body away from the subject, or leaving the room) before delivering a reward. We found that the level of subjects' active anticipatory behaviour was highest in the control condition and decreased with a decreasing level of attention paid to the subject by the experimenter. Additionally, goats 'stared' (i.e. stood alert) at the experimental set-up for significantly more time when the experimenter was present but paid less attention to the subject ('Head' and 'Back' condition) than in the 'Control' and 'Out' conditions. In a second experiment, the experimenter provided different human-given cues that indicated the location of a hidden food reward in a two-way object choice task. Goats were able to use both 'Touch' and 'Point' cues to infer the correct location of the reward but did not perform above the level expected by chance in the 'Head only' condition. We conclude that goats are able to differentiate among different body postures of a human, including head orientation; however, despite their success at using multiple physical human cues, they fail to spontaneously use human head direction as a cue in a food-related context.

  13. ASTEC: Controls analysis for personal computers

    NASA Technical Reports Server (NTRS)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  14. Human Machine Interface Programming and Testing

    NASA Technical Reports Server (NTRS)

    Foster, Thomas Garrison

    2013-01-01

    Human Machine Interface (HMI) Programming and Testing is about creating graphical displays to mimic mission critical ground control systems in order to provide NASA engineers with the ability to monitor the health management of these systems in real time. The Health Management System (HMS) is an online interactive human machine interface system that monitors all Kennedy Ground Control Subsystem (KGCS) hardware in the field. The Health Management System is essential to NASA engineers because it allows remote control and monitoring of the health management systems of all the Programmable Logic Controllers (PLC) and associated field devices. KGCS will have equipment installed at the launch pad, Vehicle Assembly Building, Mobile Launcher, as well as the Multi-Purpose Processing Facility. I am designing graphical displays to monitor and control new modules that will be integrated into the HMS. The design of the display screen will closely mimic the appearance and functionality of the actual modules. There are many different field devices used to monitor health management and each device has its own unique set of health management related data, therefore each display must also have its own unique way to display this data. Once the displays are created, the RSLogix5000 application is used to write software that maps all the required data read from the hardware to the graphical display. Once this data is mapped to its corresponding display item, the graphical display and hardware device will be connected through the same network in order to test all possible scenarios and types of data the graphical display was designed to receive. Test Procedures will be written to thoroughly test out the displays and ensure that they are working correctly before being deployed to the field. Additionally, the Kennedy Ground Controls Subsystem's user manual will be updated to explain to the NASA engineers how to use the new module displays.

  15. Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency

    NASA Astrophysics Data System (ADS)

    Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark

    2010-04-01

    This paper discusses a student project at Kettering University focusing on the design and construction of an energy-efficient greenhouse climate control system. In order to maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer-controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water-filled base unit. The heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabVIEW computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable unit with a 5' x 5' footprint. Control inputs were temperature, water level, and humidity sensors, and output control devices were fan-actuating relays and water-fill solenoid valves. A Graphical User Interface was developed to monitor the system, set control parameters, and provide programmable data recording times and intervals.
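
    A guess at the style of rule such a controller might apply to the fan relays and fill solenoid: bang-bang control with a hysteresis band to avoid relay chatter. Thresholds and names are invented, not taken from the project.

        def control_step(temp_c, water_level, fan_on,
                         t_high=30.0, t_low=27.0, min_level=0.2):
            """One pass of a bang-bang rule with hysteresis: fans vent excess
            heat, the solenoid tops up the thermal-storage water. Returns the
            new actuator states."""
            if temp_c > t_high:
                fan_on = True
            elif temp_c < t_low:
                fan_on = False          # hysteresis band avoids relay chatter
            fill_valve_open = water_level < min_level
            return fan_on, fill_valve_open

        print(control_step(temp_c=31.5, water_level=0.15, fan_on=False))
        # -> (True, True): start the fans, open the fill solenoid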

  16. Human Factors Considerations in System Design

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M. (Editor); Vanbalen, P. M. (Editor); Moe, K. L. (Editor)

    1983-01-01

    Human factors considerations in systems design were examined. Human factors in automated command and control, the efficiency of the human-computer interface, and system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; information display and interaction in real-time environments.

  17. SAR Simulation with Magneto Chiral Effects for Human Head Radiated from Cellular Phones

    NASA Astrophysics Data System (ADS)

    Torres-Silva, H.

    2008-09-01

    A numerical method for a microwave signal emitted by a cellular phone, propagating in a magneto-chiral medium characterized by an extended Born-Fedorov formalism, is presented. It is shown that the use of a cell model, combined with a real model of the human head derived from magnetic resonance images, allows a good determination of the near fields induced in the head when the brain chirality and the battery magnetic field are considered together. The results on a 2-D human head model show the evolution of the specific absorption rate (SAR) and the spatial peak SAR, which are sensitive to the magneto-chiral factor; this factor is important in the brain layer. For GSM/PCN phones, extremely-low-frequency real pulsed magnetic fields (on the order of 10 to 60 milligauss) are added to the model through the whole of the user's head. The most important conclusion of our work is that the head absorption is larger than the results for a classical model without the magneto-chiral effect. Hot spots are produced due to the combination of the microwave and the magnetic field produced by the phone's operation. The FDTD method was used to compute the SARs inside the MRI-based head models, consisting of various tissues, at 1.8 GHz. As a result, we found that in head models having more than four kinds of tissue, the localized peak SAR reaches its maximum inside the head for five or more tissues, including skin, bone, blood and brain cells.
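
    The local SAR underlying such simulations follows a standard point relation, sketched here with approximate grey-matter values near 1.8 GHz; the paper's magneto-chiral extension is not modelled, and the numbers are illustrative only.

        def point_sar(e_rms, sigma, rho):
            """Local specific absorption rate, SAR = sigma * E^2 / rho (W/kg),
            with E the RMS electric field (V/m), sigma the tissue conductivity
            (S/m), and rho the mass density (kg/m^3)."""
            return sigma * e_rms ** 2 / rho

        # Rough grey-matter values near 1.8 GHz (illustrative only):
        print(point_sar(e_rms=40.0, sigma=1.15, rho=1040.0))   # ~1.77 W/kg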

  18. Factors in Human-Computer Interface Design (A Pilot Study).

    DTIC Science & Technology

    1994-12-01

    This study used a pretest-posttest control group experimental design, using different prototypes, to test the effects of consistency on speed, retention, and user satisfaction in the human-computer interface.

  19. Biased feedback in brain-computer interfaces.

    PubMed

    Barbero, Alvaro; Grosse-Wentrup, Moritz

    2010-07-27

    Even though feedback is considered to play an important role in learning how to operate a brain-computer interface (BCI), to date no significant influence of feedback design on BCI performance has been reported in the literature. In this work, we adapt a standard motor-imagery BCI paradigm to study how BCI performance is affected by biasing the belief subjects have in their level of control over the BCI system. Our findings indicate that subjects already capable of operating a BCI are impeded by inaccurate feedback, while subjects normally performing at or close to chance level may actually benefit from an incorrect belief about their performance level. Our results imply that optimal feedback design in BCIs should take into account a subject's current skill level.

  20. A brain-computer interface to support functional recovery.

    PubMed

    Kjaer, Troels W; Sørensen, Helge B

    2013-01-01

    Brain-computer interfaces (BCIs) register changes in brain activity and utilize this to control computers. The most widely used method is based on registration of electrical signals from the cerebral cortex using extracranially placed electrodes, also called electroencephalography (EEG). The features extracted from the EEG may, besides controlling the computer, also be fed back to the patient, for instance as visual input; this facilitates a learning process. BCIs allow us to utilize brain activity in the rehabilitation of patients after stroke. The activity of the cerebral cortex varies with the type of movement we imagine, and by letting the patient know the type of brain activity best associated with the intended movement, the rehabilitation process may be faster and more efficient. The focus of BCI utilization in medicine has changed in recent years. While we previously focused on devices facilitating communication in the rather few patients with locked-in syndrome, much interest is now devoted to the therapeutic use of BCI in rehabilitation. For this latter group of patients, the device is not intended to be a lifelong assistive companion but rather a 'teacher' during the rehabilitation period. Copyright © 2013 S. Karger AG, Basel.

  1. A link-segment model of upright human posture for analysis of head-trunk coordination

    NASA Technical Reports Server (NTRS)

    Nicholas, S. C.; Doxey-Gasway, D. D.; Paloski, W. H.

    1998-01-01

    Sensory-motor control of upright human posture may be organized in a top-down fashion such that certain head-trunk coordination strategies are employed to optimize visual and/or vestibular sensory inputs. Previous quantitative models of the biomechanics of human posture control have examined the simple case of ankle sway strategy, in which an inverted pendulum model is used, and the somewhat more complicated case of hip sway strategy, in which multisegment, articulated models are used. While these models can be used to quantify the gross dynamics of posture control, they are not sufficiently detailed to analyze head-trunk coordination strategies that may be crucial to understanding its underlying mechanisms. In this paper, we present a biomechanical model of upright human posture that extends an existing four mass, sagittal plane, link-segment model to a five mass model including an independent head link. The new model was developed to analyze segmental body movements during dynamic posturography experiments in order to study head-trunk coordination strategies and their influence on sensory inputs to balance control. It was designed specifically to analyze data collected on the EquiTest (NeuroCom International, Clackamas, OR) computerized dynamic posturography system, where the task of maintaining postural equilibrium may be challenged under conditions in which the visual surround, support surface, or both are in motion. The performance of the model was tested by comparing its estimated ground reaction forces to those measured directly by support surface force transducers. We conclude that this model will be a valuable analytical tool in the search for mechanisms of balance control.
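
    The model-validation step compares estimated ground reaction forces against the force-transducer measurements. In a link-segment model that estimate follows from summing each segment's weight and inertial force; a small sketch with illustrative five-segment masses (the segment values are invented, not the paper's):

        import numpy as np

        G = 9.81   # gravitational acceleration, m/s^2

        def ground_reaction_force(masses, accelerations):
            """Net ground reaction force of a sagittal-plane link-segment
            model: the support surface supplies each segment's weight plus
            the force accelerating it, F = sum_i m_i * (a_i + g_vec).
            masses: (n,) in kg; accelerations: (n, 2) in m/s^2 (x, z)."""
            g_vec = np.array([0.0, G])
            return (masses[:, None] * (accelerations + g_vec)).sum(axis=0)

        # Five illustrative segment masses (kg) for a ~72 kg model, at rest:
        m = np.array([17.6, 14.2, 21.0, 14.0, 5.0])
        a = np.zeros((5, 2))
        print(ground_reaction_force(m, a))   # [0, ~704] N vertical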

  2. A high performance sensorimotor beta rhythm-based brain computer interface associated with human natural motor behavior

    NASA Astrophysics Data System (ADS)

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Floeter, Mary Kay; Hattori, Noriaki; Hallett, Mark

    2008-03-01

    To explore the reliability of a high-performance brain-computer interface (BCI) using non-invasive EEG signals associated with human natural motor behavior, which does not require extensive training. We propose a new BCI method in which users either sustain or stop a motor task, time-locked to a predefined time window. Nine healthy volunteers, one stroke survivor with right-sided hemiparesis and one patient with amyotrophic lateral sclerosis (ALS) participated in this study. Subjects did not receive BCI training before participating in this study. We investigated tasks of both physical movement and motor imagery. The surface Laplacian derivation was used for enhancing EEG spatial resolution. A model-free threshold setting method was used for the classification of motor intentions. The performance of the proposed BCI was validated by an online sequential binary-cursor-control game for two-dimensional cursor movement. Event-related desynchronization and synchronization were observed when subjects sustained or stopped either motor execution or motor imagery. Feature analysis showed that EEG beta-band activity over the sensorimotor area provided the largest discrimination. With simple model-free classification of beta-band EEG activity from a single electrode (with surface Laplacian derivation), the online classifications of the EEG activity with motor execution/motor imagery were: >90%/~80% for six healthy volunteers, >80%/~80% for the stroke patient and ~90%/~80% for the ALS patient. The EEG activities of the other three healthy volunteers were not classifiable. The sensorimotor beta rhythm of EEG associated with human natural motor behavior can be used for a reliable and high-performance BCI for both healthy subjects and patients with neurological disorders. Significance: The proposed new non-invasive BCI method highlights a practical BCI for clinical applications, where the user does not require extensive training.
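
    A compact sketch of the two signal-processing ingredients named above: a surface Laplacian derivation (centre electrode minus the mean of its neighbours) and a model-free threshold rule on beta-band power. Band edges and the threshold are assumptions, not the study's calibrated values.

        import numpy as np

        def surface_laplacian(center, neighbors):
            """Surface Laplacian derivation: the electrode of interest minus
            the mean of its surrounding electrodes, sharpening spatial focus.
            center: (t,) samples; neighbors: (k, t) samples."""
            return center - neighbors.mean(axis=0)

        def beta_power(x, fs, lo=18.0, hi=26.0):
            """Beta-band power of one derivation (band edges assumed)."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            f = np.fft.rfftfreq(x.size, 1.0 / fs)
            return spec[(f >= lo) & (f <= hi)].mean()

        def classify(x, fs, threshold):
            """Model-free threshold rule: beta power above the threshold is
            read as 'sustain', below as 'stop'."""
            return "sustain" if beta_power(x, fs) > threshold else "stop"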

  3. User interface issues in supporting human-computer integrated scheduling

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.

  4. Computer interfaces for the visually impaired

    NASA Technical Reports Server (NTRS)

    Higgins, Gerry

    1991-01-01

    Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, combining off-the-shelf products with adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile access to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public-domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.

  5. Core competencies to prevent and control chronic diseases of Tambol Health Centers' head in Thailand.

    PubMed

    Leerapan, Prasit; Kengganpanich, Tharadol; Sompopcharoen, Malinee

    2012-06-01

    To assess the core competencies to prevent and control chronic diseases of the heads of Tambol Health Centers (THC) in Thailand. This cross-sectional survey research was carried out with 2,049 heads of THC, selected randomly from all provinces of every region out of a total population of 9,985. The data were collected through mail questionnaires, and the reliability values of the three competency-domain questionnaires were between 0.75 and 0.93. Data analysis was done by computing frequency, percentage, arithmetic mean, independent t-test and one-way ANOVA. The total core competency values for prevention and control of diabetes and hypertension of the THC heads were found at the high and moderate levels (3.0% and 78.7%, respectively). Similar findings were found for the 'personal attribution' and 'intellectual capacity' competency domains, while 8.0 percent and 46.2 percent of the respondents had high and moderate levels of the 'work skill' domain, respectively. In addition, differences in competency domains were found in accordance with the regions where the THCs were located, the ability to develop a plan for disease prevention, and readiness for changing behaviors of the risk groups. Personal attributes such as gender, age, family economic status, and the location of the THC were not found to affect any competency domain, except that in the intellectual capacity domain male THC heads had a higher level than females, and in the work skill domain THC heads working in municipal areas had a higher level than those working outside municipal areas. Core competencies of the heads of THC in chronic disease prevention and control were found at the 'somewhat good' level except for the work skill domain, which needed to be developed. Thus, the Ministry of Public Health should establish a specific policy and strategy on human resource development by using core competencies on chronic disease prevention and control.

  6. The brain-computer interface cycle.

    PubMed

    van Gerven, Marcel; Farquhar, Jason; Schaefer, Rebecca; Vlek, Rutger; Geuze, Jeroen; Nijholt, Anton; Ramsey, Nick; Haselager, Pim; Vuurpijl, Louis; Gielen, Stan; Desain, Peter

    2009-08-01

    Brain-computer interfaces (BCIs) have attracted much attention recently, triggered by new scientific progress in understanding brain function and by impressive applications. The aim of this review is to give an overview of the various steps in the BCI cycle, i.e., the loop from the measurement of brain activity, classification of data, feedback to the subject and the effect of feedback on brain activity. In this article we will review the critical steps of the BCI cycle, the present issues and state-of-the-art results. Moreover, we will develop a vision on how recently obtained results may contribute to new insights in neurocognition and, in particular, in the neural representation of perceived stimuli, intended actions and emotions. Now is the right time to explore what can be gained by embracing real-time, online BCI and by adding it to the set of experimental tools already available to the cognitive neuroscientist. We close by pointing out some unresolved issues and present our view on how BCI could become an important new tool for probing human cognition.

  7. Leveraging anatomical information to improve transfer learning in brain-computer interfaces.

    PubMed

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian K C

    2015-08-01

    Brain-computer interfaces (BCIs) represent a technology with the potential to rehabilitate a range of traumatic and degenerative nervous system conditions but require a time-consuming training process to calibrate. An area of BCI research known as transfer learning is aimed at accelerating training by recycling previously recorded training data across sessions or subjects. Training data, however, is typically transferred from one electrode configuration to another without taking individual head anatomy or electrode positioning into account, which may underutilize the recycled data. We explore transfer learning with the use of source imaging, which estimates neural activity in the cortex. Transferring estimates of cortical activity, in contrast to scalp recordings, provides a way to compensate for variability in electrode positioning and head morphologies across subjects and sessions. Based on simulated and measured electroencephalography activity, we trained a classifier using data transferred exclusively from other subjects and achieved accuracies that were comparable to or surpassed a benchmark classifier (representative of a real-world BCI). Our results indicate that classification improvements depend on the number of trials transferred and the cortical region of interest. These findings suggest that cortical source-based transfer learning is a principled method to transfer data that improves BCI classification performance and provides a path to reduce BCI calibration time.
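
    As a toy stand-in for the source-imaging step, here is a minimum-norm style projection of sensor data through a leadfield matrix; transferring data in this source space, rather than at the electrodes, is what compensates for differing head geometries. The leadfield, regularization, and dimensions are invented for illustration.

        import numpy as np

        def to_source_space(eeg, leadfield, lam=1e-2):
            """Minimum-norm style projection of sensor data to source space:
            x_src = L^T (L L^T + lam I)^(-1) x_sensor."""
            L = leadfield                             # (n_sensors, n_sources)
            G = L @ L.T + lam * np.eye(L.shape[0])
            return L.T @ np.linalg.solve(G, eeg)

        L = np.random.default_rng(2).normal(size=(32, 100))   # toy leadfield
        x = np.random.default_rng(3).normal(size=32)          # one EEG sample
        print(to_source_space(x, L).shape)                    # (100,)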

  8. Pilot control through the TAFCOS automatic flight control system

    NASA Technical Reports Server (NTRS)

    Wehrend, W. R., Jr.

    1979-01-01

    The set of flight control logic used in a recently completed flight test program to evaluate the total automatic flight control system (TAFCOS) with the controller operating in a fully automatic mode, was used to perform an unmanned simulation on an IBM 360 computer in which the TAFCOS concept was extended to provide a multilevel pilot interface. A pilot TAFCOS interface for direct pilot control by use of a velocity-control-wheel-steering mode was defined as well as a means for calling up conventional autopilot modes. It is concluded that the TAFCOS structure is easily adaptable to the addition of a pilot control through a stick-wheel-throttle control similar to conventional airplane controls. Conventional autopilot modes, such as airspeed-hold, altitude-hold, heading-hold, and flight path angle-hold, can also be included.

  9. Brain-computer interface: changes in performance using virtual reality techniques.

    PubMed

    Ron-Angevin, Ricardo; Díaz-Estrella, Antonio

    2009-01-09

    The ability to control electroencephalographic (EEG) signals when different mental tasks are carried out would provide a method of communication for people with serious motor function problems. This system is known as a brain-computer interface (BCI). Due to the difficulty of controlling one's own EEG signals, a suitable training protocol is required to motivate subjects, as it is necessary to provide some type of visual feedback allowing subjects to see their progress. Conventional systems of feedback are based on simple visual presentations, such as a horizontal bar extension. However, virtual reality is a powerful tool with graphical possibilities to improve BCI-feedback presentation. The objective of the study is to explore the advantages of the use of feedback based on virtual reality techniques compared to conventional systems of feedback. Sixteen untrained subjects, divided into two groups, participated in the experiment. A group of subjects was trained using a BCI system, which uses conventional feedback (bar extension), and another group was trained using a BCI system, which submits subjects to a more familiar environment, such as controlling a car to avoid obstacles. The obtained results suggest that EEG behaviour can be modified via feedback presentation. Significant differences in classification error rates between both interfaces were obtained during the feedback period, confirming that an interface based on virtual reality techniques can improve the feedback control, specifically for untrained subjects.

  10. Using a cVEP-Based Brain-Computer Interface to Control a Virtual Agent.

    PubMed

    Riechmann, Hannes; Finke, Andrea; Ritter, Helge

    2016-06-01

    Brain-computer interfaces provide a means for controlling a device by brain activity alone. One major drawback of noninvasive BCIs is their low information transfer rate, obstructing a wider deployment outside the lab. BCIs based on codebook visually evoked potentials (cVEP) outperform all other state-of-the-art systems in that regard. Previous work investigated cVEPs for spelling applications. We present the first cVEP-based BCI for use in real-world settings to accomplish everyday tasks such as navigation or action selection. To this end, we developed and evaluated a cVEP-based on-line BCI that controls a virtual agent in a simulated, but realistic, 3-D kitchen scenario. We show that cVEPs can be reliably triggered with stimuli in less restricted presentation schemes, such as on dynamic, changing backgrounds. We introduce a novel, dynamic repetition algorithm that allows for optimizing the balance between accuracy and speed individually for each user. Using these novel mechanisms in a 12-command cVEP-BCI in the 3-D simulation results in ITRs of 50 bits/min on average and 68 bits/min maximum. Thus, this work supports the notion of cVEP-BCIs as a particular fast and robust approach suitable for real-world use.
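
    The dynamic repetition idea can be sketched as an evidence-accumulation stopping rule: keep averaging template-correlation scores over repetitions and stop once the best class beats the runner-up by a per-user margin. This is a plausible reading of the accuracy/speed balance described above, not the authors' exact algorithm.

        import numpy as np

        def dynamic_repetitions(correlations, margin=0.15, max_reps=10):
            """Average template-correlation scores over successive stimulus
            repetitions; stop as soon as the best class beats the runner-up
            by a per-user margin (trading accuracy against speed).
            correlations: (reps, classes) array of scores."""
            acc = np.zeros(correlations.shape[1])
            for rep in range(min(max_reps, correlations.shape[0])):
                acc += correlations[rep]
                mean = acc / (rep + 1)
                ranked = np.sort(mean)
                if ranked[-1] - ranked[-2] >= margin:
                    break
            return int(mean.argmax()), rep + 1

        rng = np.random.default_rng(4)
        scores = 0.1 * rng.normal(size=(10, 12))
        scores[:, 3] += 0.3                  # class 3 is the attended target
        print(dynamic_repetitions(scores))   # stops early once class 3 wins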

  11. Investigating the Role of Hydrologic Residence Time in Nitrogen Transformations at the Sediment-Water Interface using Controlled Variable Head Experiments

    NASA Astrophysics Data System (ADS)

    Hampton, T. B.; Zarnetske, J. P.; Briggs, M. A.; Singha, K.; Day-Lewis, F. D.

    2017-12-01

    Many important biogeochemical processes governing both carbon and nitrogen dynamics in streams take place at the sediment-water interface (SWI). This interface is highly variable in biogeochemical function, with stream stage often influencing the magnitude and direction of water and solute exchange through the SWI. It is well known that the SWI can be an important location for carbon and nitrogen transformations, including denitrification and greenhouse gas production. The degree of mixing of carbon and nitrate, along with oxygen from surface waters, is strongly influenced by hydrologic exchange at the SWI. We hypothesize that hydrologic residence time, which is also determined by the magnitude of exchange, is a key control on the fate of nitrate at the SWI and on the end products of denitrification. Previous studies in the headwaters of the Ipswich River in MA as part of the Lotic Intersite Nitrogen Experiments (LINX II) and other long-term monitoring suggest that the Ipswich River SWI represents an important source of nitrous oxide, a potent greenhouse gas. Using a novel constant-head infiltrometer ring embedded in the stream sediments, we created four unique controlled down-welling (i.e., recharge) conditions, and tested how varying this hydrologic flux and thus the residence time distribution influenced biogeochemical function of the Ipswich River SWI. Specifically, we added isotopically-labelled 15N-nitrate to stream water during each controlled hydrologic flux experiment to quantify nitrate transformation rates, including denitrification end products, under the different hydrologic conditions. We also measured a suite of carbon and nitrogen solutes, along with dissolved oxygen conditions throughout each experiment to characterize the broader residence timescale and biogeochemical responses to the hydrologic manipulations. Initial results show that the oxic conditions of the SWI were strongly responsive to changes in hydrologic flux rates, thereby changing the

  12. Brain-computer interface technology: a review of the Second International Meeting.

    PubMed

    Vaughan, Theresa M; Heetderks, William J; Trejo, Leonard J; Rymer, William Z; Weinrich, Michael; Moore, Melody M; Kübler, Andrea; Dobkin, Bruce H; Birbaumer, Niels; Donchin, Emanuel; Wolpaw, Elizabeth Winter; Wolpaw, Jonathan R

    2003-06-01

    This paper summarizes the Brain-Computer Interfaces for Communication and Control, The Second International Meeting, held in Rensselaerville, NY, in June 2002. Sponsored by the National Institutes of Health and organized by the Wadsworth Center of the New York State Department of Health, the meeting addressed current work and future plans in brain-computer interface (BCI) research. Ninety-two researchers representing 38 different research groups from the United States, Canada, Europe, and China participated. The BCIs discussed at the meeting use electroencephalographic activity recorded from the scalp or single-neuron activity recorded within cortex to control cursor movement, select letters or icons, or operate neuroprostheses. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI that recognizes the commands contained in the input and expresses them in device control. Current BCIs have maximum information transfer rates of up to 25 b/min. Achievement of greater speed and accuracy requires improvements in signal acquisition and processing, in translation algorithms, and in user training. These improvements depend on interdisciplinary cooperation among neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective criteria for evaluating alternative methods. The practical use of BCI technology will be determined by the development of appropriate applications and identification of appropriate user groups, and will require careful attention to the needs and desires of individual users.
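
    Information transfer rates of the kind quoted here are conventionally computed with the Wolpaw formula, B = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)) bits per selection. A small worked example follows; the target count, accuracy, and selection rate are illustrative, chosen to land near the 25 b/min figure above.

        import math

        def wolpaw_itr(n_classes, accuracy, selections_per_min):
            """Wolpaw information transfer rate in bits/min."""
            n, p = n_classes, accuracy
            if p >= 1.0:
                bits = math.log2(n)
            elif p <= 1.0 / n:
                bits = 0.0    # at or below chance carries no information
            else:
                bits = (math.log2(n) + p * math.log2(p)
                        + (1 - p) * math.log2((1 - p) / (n - 1)))
            return bits * selections_per_min

        # e.g. a 4-target BCI at 90% accuracy, 18 selections per minute:
        print(wolpaw_itr(4, 0.90, 18))   # ~24.7 bits/min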

  13. Continuous Three-Dimensional Control of a Virtual Helicopter Using a Motor Imagery Based Brain-Computer Interface

    PubMed Central

    Doud, Alexander J.; Lucas, John P.; Pisansky, Marc T.; He, Bin

    2011-01-01

    Brain-computer interfaces (BCIs) allow a user to interact with a computer system using thought. However, only recently have devices capable of providing sophisticated multi-dimensional control been achieved non-invasively. A major goal for non-invasive BCI systems has been to provide continuous, intuitive, and accurate control, while retaining a high level of user autonomy. By employing electroencephalography (EEG) to record and decode sensorimotor rhythms (SMRs) induced from motor imaginations, a consistent, user-specific control signal may be characterized. Utilizing a novel method of interactive and continuous control, we trained three normal subjects to modulate their SMRs to achieve three-dimensional movement of a virtual helicopter that is fast, accurate, and continuous. In this system, the virtual helicopter's forward-backward translation and elevation controls were actuated through the modulation of sensorimotor rhythms that were converted to forces applied to the virtual helicopter at every simulation time step, and the helicopter's angle of left or right rotation was linearly mapped, with higher resolution, from sensorimotor rhythms associated with other motor imaginations. These different resolutions of control allow for interplay between general intent actuation and fine control as is seen in the gross and fine movements of the arm and hand. Subjects controlled the helicopter with the goal of flying through rings (targets) randomly positioned and oriented in a three-dimensional space. The subjects flew through rings continuously, acquiring as many as 11 consecutive rings within a five-minute period. In total, the study group successfully acquired over 85% of presented targets. These results affirm the effective, three-dimensional control of our motor imagery based BCI system, and suggest its potential applications in biological navigation, neuroprosthetics, and other applications. PMID:22046274
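
    A toy rendering of the control mapping described above: two decoded sensorimotor-rhythm signals act as forces integrated into translation and elevation at each time step, while a third is mapped linearly, at finer resolution, to heading. Dynamics, gains, and signal names are invented.

        import numpy as np

        DT = 0.02     # simulation time step in seconds (assumed)
        MASS = 1.0    # virtual helicopter mass (arbitrary units)

        class VirtualHelicopter:
            """Toy dynamics: SMR-derived signals become forces on position,
            while a separate signal maps linearly to the heading angle."""
            def __init__(self):
                self.pos = np.zeros(2)   # (forward, elevation)
                self.vel = np.zeros(2)
                self.heading = 0.0       # radians

            def step(self, smr_fb, smr_el, smr_rot,
                     force_gain=5.0, max_turn=np.pi / 4):
                force = force_gain * np.array([smr_fb, smr_el])
                self.vel += (force / MASS) * DT    # force control: integrate
                self.pos += self.vel * DT
                self.heading = max_turn * smr_rot  # linear, finer-grained map

        heli = VirtualHelicopter()
        heli.step(smr_fb=0.4, smr_el=-0.1, smr_rot=0.2)   # one control update
        print(heli.pos, heli.heading)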

  14. Human factors with nonhumans - Factors that affect computer-task performance

    NASA Technical Reports Server (NTRS)

    Washburn, David A.

    1992-01-01

    There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.

  15. Comparison of tongue interface with keyboard for control of an assistive robotic arm.

    PubMed

    Struijk, Lotte N S Andreasen; Lontis, Romulus

    2017-07-01

    This paper demonstrates how an assistive 6-DoF robotic arm with a gripper can be controlled manually using a tongue interface. The proposed method suggests that it is possible for a user to manipulate the surroundings with his or her tongue using the inductive tongue control system as deployed in this study. The sensors of an inductive tongue-computer interface were mapped to the Cartesian control of an assistive robotic arm. The resulting control system was tested manually in order to compare manual control of the robot using a standard keyboard and using the tongue interface. Two healthy subjects controlled the robotic arm to precisely move a bottle of water from one location to another. The results show that the tongue interface was able to fully control the robotic arm in a similar manner to the standard keyboard, resulting in the same number of successful manipulations and an average increase in task duration of up to 30% compared with the standard keyboard.
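
    One plausible form of the sensor-to-Cartesian mapping, sketched with invented sensor indices and gains: each activated sensor contributes a jog direction, and the contributions sum into a single velocity command for the arm. Orientation and gripper sensors are omitted here.

        import numpy as np

        # Invented mapping from tongue-interface sensors to Cartesian jogs.
        SENSOR_TO_DIR = {
            0: np.array([+1.0, 0.0, 0.0]),   # +x
            1: np.array([-1.0, 0.0, 0.0]),   # -x
            2: np.array([0.0, +1.0, 0.0]),   # +y
            3: np.array([0.0, -1.0, 0.0]),   # -y
            4: np.array([0.0, 0.0, +1.0]),   # +z
            5: np.array([0.0, 0.0, -1.0]),   # -z
        }

        def cartesian_command(active_sensors, speed_mm_s=20.0):
            """Sum the directions of all activated sensors into one Cartesian
            velocity command for the arm controller."""
            v = sum((SENSOR_TO_DIR[s] for s in active_sensors),
                    start=np.zeros(3))
            return speed_mm_s * v

        print(cartesian_command({0, 4}))   # jog +x and +z together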

  16. An independent brain-computer interface using covert non-spatial visual selective attention

    NASA Astrophysics Data System (ADS)

    Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K.; Gao, Shangkai

    2010-02-01

    In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.

  17. An independent brain-computer interface using covert non-spatial visual selective attention.

    PubMed

    Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K; Gao, Shangkai

    2010-02-01

    In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 +/- 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.

  18. Formalisms for user interface specification and design

    NASA Technical Reports Server (NTRS)

    Auernheimer, Brent J.

    1989-01-01

    The application of formal methods to the specification and design of human-computer interfaces is described. A broad outline of human-computer interface problems, a description of the field of cognitive engineering and two relevant research results, the appropriateness of formal specification techniques, and potential NASA application areas are described.

  19. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows all computational trouble spots to be identified beforehand, and reliable, accurate computational optimization methods to be designed. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
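
    The key geometric fact exploited above — that visual angles between projection rays are invariant under camera rotation but not under translation — can be verified numerically; the points and motions below are synthetic.

      import numpy as np

      def visual_angle(camera, p, q):
          """Angle between the projection rays from the camera center to
          3-D points p and q."""
          u, v = p - camera, q - camera
          cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.arccos(np.clip(cosang, -1.0, 1.0))

      p, q = np.array([1.0, 2.0, 8.0]), np.array([-2.0, 1.0, 6.0])
      cam = np.zeros(3)

      # Rotating the camera rotates both rays together, so the angle is unchanged.
      theta = np.deg2rad(25)
      R = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0, 0, 1]])
      print(visual_angle(cam, p, q), visual_angle(cam, R @ p, R @ q))  # equal

      # Translating the camera changes the angle; these changes over time carry
      # the information used to recover the translation direction.
      print(visual_angle(cam + np.array([0.5, 0.0, 0.0]), p, q))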

  20. TMS communications software. Volume 1: Computer interfaces

    NASA Technical Reports Server (NTRS)

    Brown, J. S.; Lenker, M. D.

    1979-01-01

    A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) as well as for evaluation of the bus concept is considered. Hardware and software interfaces to the MODCOMP and NOVA minicomputers are included. The system software required to drive the interfaces in each TMS computer is described. Documentation of other software for bus statistics monitoring and for transferring files across the bus is also included.

  1. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the motion-sensing remote controllers of the Nintendo Wii [1]. Video-based human-computer interaction (HCI) techniques in particular have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI offers the great benefit of freeing players from unwieldy game controllers. Moreover, video-based HCI is crucial for communication between humans and computers, since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the achievable accuracy is highly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports titles, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions in specific performances (e.g., a golf swing or walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a sub-body part and each row represents one time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the entries of the 3D motion-capture data matrix are not pixel values but are closer to a human level of semantics.
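
    The matrix layout described above (rows as time frames, columns as sub-body-part channels) makes sub-body-part extraction a simple column selection, as the sketch below illustrates; the sizes and channel map are hypothetical.

      import numpy as np

      # Motion-capture clip laid out as described above: one row per captured
      # time frame, one column per sub-body-part channel.
      n_frames, n_channels = 1200, 60          # hypothetical sizes
      motion = np.random.randn(n_frames, n_channels)

      # Hypothetical channel map: which columns belong to which body part.
      CHANNELS = {"right_arm": slice(0, 9), "left_leg": slice(30, 39)}

      # Extracting a sub-body part's motion is just column selection.
      right_arm_motion = motion[:, CHANNELS["right_arm"]]
      print(right_arm_motion.shape)            # (1200, 9)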

  2. Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces

    PubMed Central

    Bocquelet, Florent; Hueber, Thomas; Girin, Laurent; Savariaux, Christophe; Yvert, Blaise

    2016-01-01

    Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded on a reference speaker synchronously with the produced speech signal. This DNN is then used in both offline and online modes to map the position of sensors glued on different speech articulators into acoustic parameters that are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained as assessed by perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed the real-time control of the synthesizer by both the reference speaker and new speakers, in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results open the way to future speech BCI applications using such an articulatory-based speech synthesizer. PMID:27880768
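
    A minimal sketch of the kind of articulatory-to-acoustic forward mapping described above is given below: one frame of EMA sensor coordinates in, one frame of acoustic parameters out. The layer sizes and random weights are placeholders; the real network would be trained on the reference speaker's synchronized EMA and speech data.

      import numpy as np

      rng = np.random.default_rng(0)

      # Placeholder weights standing in for the trained DNN; real weights would
      # come from training on the reference speaker's EMA + speech corpus.
      sizes = [18, 128, 128, 25]   # e.g. 6 EMA sensors x 3 coords -> 25 vocoder params
      weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
      biases = [np.zeros(b) for b in sizes[1:]]

      def articulatory_to_acoustic(ema_frame):
          """Map one frame of articulator positions to acoustic parameters
          that a vocoder would turn into audio."""
          h = ema_frame
          for W, b in zip(weights[:-1], biases[:-1]):
              h = np.tanh(h @ W + b)           # hidden layers
          return h @ weights[-1] + biases[-1]  # linear output layer

      print(articulatory_to_acoustic(rng.normal(size=18)).shape)  # (25,)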

  3. The applicability of a computer model for predicting head injury incurred during actual motor vehicle collisions.

    PubMed

    Moran, Stephan G; Key, Jason S; McGwin, Gerald; Keeley, Jason W; Davidson, James S; Rue, Loring W

    2004-07-01

    Head injury is a significant cause of both morbidity and mortality. Motor vehicle collisions (MVCs) are the most common source of head injury in the United States. No studies have conclusively determined the applicability of computer models for accurate prediction of head injuries sustained in actual MVCs. This study sought to determine the applicability of such models for predicting head injuries sustained by MVC occupants. The Crash Injury Research and Engineering Network (CIREN) database was queried for restrained drivers who sustained a head injury. These collisions were modeled using occupant dynamic modeling (MADYMO) software, and head injury scores were generated. The computer-generated head injury scores then were evaluated with respect to the actual head injuries sustained by the occupants to determine the applicability of MADYMO computer modeling for predicting head injury. Five occupants meeting the selection criteria for the study were selected from the CIREN database. The head injury scores generated by MADYMO were lower than expected given the actual injuries sustained. In only one case did the computer analysis predict a head injury of a severity similar to that actually sustained by the occupant. Although computer modeling accurately simulates experimental crash tests, it may not be applicable for predicting head injury in actual MVCs. Many complicating factors surrounding actual MVCs make accurate computer modeling difficult. Future modeling efforts should consider variables such as age of the occupant and should account for a wider variety of crash scenarios.

  4. A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management.

    PubMed

    Hocraffer, Amy; Nam, Chang S

    2017-01-01

    A meta-analysis was conducted to systematically evaluate the current state of research on human-system interfaces for users controlling semi-autonomous swarms composed of groups of drones or unmanned aerial vehicles (UAVs). UAV swarms pose several human factors challenges, such as high cognitive demands, non-intuitive behavior, and serious consequences for errors. This article presents findings from a meta-analysis of 27 UAV swarm management papers focused on the human-system interface and human factors concerns, providing an overview of the advantages, challenges, and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated. In general, allowing user- and mission-specific customization of user interfaces and raising the swarm's level of autonomy to reduce operator cognitive workload are beneficial and improve situation awareness (SA). It is clear that more research is needed in this rapidly evolving field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Brain-computer interface controlled functional electrical stimulation device for foot drop due to stroke.

    PubMed

    Do, An H; Wang, Po T; King, Christine E; Schombs, Andrew; Cramer, Steven C; Nenadic, Zoran

    2012-01-01

    Gait impairment due to foot drop is a common outcome of stroke, and current physiotherapy provides only limited restoration of gait function. Gait function can also be aided by orthoses, but these devices may be cumbersome and their benefits disappear upon removal. Hence, new neuro-rehabilitative therapies are being sought to generate permanent improvements in motor function beyond those of conventional physiotherapies through positive neural plasticity processes. Here, the authors describe an electroencephalogram (EEG) based brain-computer interface (BCI) controlled functional electrical stimulation (FES) system that enabled a stroke subject with foot drop to re-establish foot dorsiflexion. To this end, a prediction model was generated from EEG data collected as the subject alternated between periods of idling and attempted foot dorsiflexion. This prediction model was then used to classify online EEG data into either "idling" or "dorsiflexion" states, and this information was subsequently used to control an FES device to elicit effective foot dorsiflexion. The performance of the system was assessed in online sessions, where the subject was prompted by a computer to alternate between periods of idling and dorsiflexion. The subject demonstrated purposeful operation of the BCI-FES system, with an average cross-correlation between instructional cues and BCI-FES response of 0.60 over 3 sessions. In addition, analysis of the prediction model indicated that non-classical brain areas were activated in the process, suggesting post-stroke cortical re-organization. In the future, these systems may be explored as a potential therapeutic tool that can help promote positive plasticity and neural repair in chronic stroke patients.
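
    The reported performance metric — cross-correlation between the binary instructional cue and the decoded BCI-FES state — can be sketched as below, with a lag search to allow for response latency; the signals, update rate, and latency are synthetic.

      import numpy as np

      def max_crosscorr(cue, response, max_lag):
          """Peak normalized cross-correlation between a binary cue sequence
          and the decoded BCI state, searched over a range of lags to allow
          for response latency."""
          cue = (cue - cue.mean()) / cue.std()
          response = (response - response.mean()) / response.std()
          n = len(cue)
          best = 0.0
          for lag in range(0, max_lag + 1):
              r = np.dot(cue[:n - lag], response[lag:]) / (n - lag)
              best = max(best, r)
          return best

      # Synthetic session: alternating idle/dorsiflexion cues, response delayed ~1 s.
      fs = 4                                   # decoder updates per second (assumed)
      cue = np.repeat([0, 1] * 5, 10 * fs)     # 10 s idle / 10 s dorsiflexion blocks
      response = np.roll(cue, 4)               # 1 s latency
      print(round(max_crosscorr(cue, response, max_lag=3 * fs), 2))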

  6. Universal computer control system (UCCS) for space telerobots

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    A universal computer control system (UCCS) is under development for all motor elements of a space telerobot. The basic hardware architecture and software design of UCCS are described, together with the rich motor sensing, control, and self-test capabilities of this all-computerized motor control system. UCCS is integrated into a multibus computer environment with a direct interface to higher level control processors, uses pulsewidth multiplier power amplifiers, and one unit can control up to sixteen different motors simultaneously at a high I/O rate. UCCS performance capabilities are illustrated with sample data.

  7. Self-paced brain-computer interface control of ambulation in a virtual reality environment.

    PubMed

    Wang, Po T; King, Christine E; Chui, Luis A; Do, An H; Nenadic, Zoran

    2012-10-01

    Spinal cord injury (SCI) often leaves affected individuals unable to ambulate. Electroencephalogram (EEG) based brain-computer interface (BCI) controlled lower extremity prostheses may restore intuitive and able-body-like ambulation after SCI. To test its feasibility, the authors developed and tested a novel EEG-based, data-driven BCI system for intuitive and self-paced control of the ambulation of an avatar within a virtual reality environment (VRE). Eight able-bodied subjects and one with SCI underwent the following 10-min training session: subjects alternated between idling and walking kinaesthetic motor imageries (KMI) while their EEG was recorded and analysed to generate subject-specific decoding models. Subjects then performed a goal-oriented online task, repeated over five sessions, in which they utilized the KMI to control the linear ambulation of an avatar and make ten sequential stops at designated points within the VRE. The average offline training performance across subjects was 77.2 ± 11.0%, ranging from 64.3% (p = 0.00176) to 94.5% (p = 6.26 × 10^-23), with chance performance being 50%. The average online performance was 8.5 ± 1.1 (out of 10) successful stops and 303 ± 53 s completion time (perfect = 211 s). All subjects achieved performances significantly different from those of a random walk (p < 0.05) in 44 of the 45 online sessions. By using a data-driven machine learning approach to decode users' KMI, this BCI-VRE system enabled intuitive and purposeful self-paced control of ambulation after only 10 min of training. The ability to achieve such BCI control with minimal training indicates that the implementation of future BCI-lower extremity prosthesis systems may be feasible.

  8. Paralyzed subject controls telepresence mobile robot using novel sEMG brain-computer interface: case study.

    PubMed

    Lyons, Kenneth R; Joshi, Sanjay S

    2013-06-01

    Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A Wifi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI as well as completion of the telerobotic command task are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
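
    A minimal sketch of the control principle described above — power in two frequency bands of one sEMG channel driving the two cursor axes — follows; the band edges, gain, and periodogram method are assumptions rather than the study's implementation.

      import numpy as np

      def band_power(x, fs, lo, hi):
          """Spectral power of signal x between lo and hi Hz (FFT periodogram)."""
          spec = np.abs(np.fft.rfft(x)) ** 2
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          return spec[(freqs >= lo) & (freqs < hi)].sum()

      def cursor_velocity(emg_window, fs, gain=1e-6):
          """Map one window of a single sEMG channel to a 2-D cursor velocity
          (arbitrary units): one frequency band drives the horizontal axis,
          the other the vertical. The band edges (60-100 Hz and 130-170 Hz)
          are assumptions, not the bands used in the study."""
          vx = gain * band_power(emg_window, fs, 60, 100)
          vy = gain * band_power(emg_window, fs, 130, 170)
          return vx, vy

      fs = 1000
      t = np.arange(0, 0.25, 1.0 / fs)
      emg = np.sin(2 * np.pi * 80 * t) * 50 + np.random.randn(len(t))
      print(cursor_velocity(emg, fs))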

  9. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    PubMed

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.

  10. Experimental setup for evaluating an adaptive user interface for teleoperation control

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interaction with a machine is the control interface, which can single-handedly define user satisfaction and the efficiency of performing a task. This paper describes the implementation of an experimental setup to study an adaptive algorithm that can help the user better tele-operate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and the associated results that were used to validate the adaptive interface on a differential drive robot from two different input devices: a joystick and a Myo gesture control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
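
    The genetic-algorithm step described above can be sketched on a toy version of the problem: evolving the parameters of an input-to-actuator mapping to minimize error against recorded (input, desired output) pairs. The encoding, fitness function, and GA settings are simplified placeholders.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy task: find a 2x2 matrix mapping joystick axes to wheel velocities
      # that reproduces a recorded set of (input, desired-output) pairs.
      inputs = rng.normal(size=(50, 2))
      true_map = np.array([[0.8, -0.2], [0.1, 1.1]])
      targets = inputs @ true_map.T

      def fitness(genome):
          """Negative mean squared teleoperation error for one candidate mapping."""
          M = genome.reshape(2, 2)
          return -np.mean((inputs @ M.T - targets) ** 2)

      pop = rng.normal(size=(40, 4))                    # 40 candidate genomes
      for generation in range(200):
          scores = np.array([fitness(g) for g in pop])
          parents = pop[np.argsort(scores)[-20:]]       # truncation selection
          children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.05, (40, 4))
          children[0] = parents[-1]                     # elitism: keep the best
          pop = children

      best = pop[0].reshape(2, 2)
      print(np.round(best, 2))                          # roughly recovers true_map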

  11. Gloved Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Adams, Richard (Inventor); Hannaford, Blake (Inventor); Olowin, Aaron (Inventor)

    2015-01-01

    Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.

  12. A hybrid brain-computer interface-based mail client.

    PubMed

    Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Li, Feng

    2013-01-01

    Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.

  13. A Hybrid Brain-Computer Interface-Based Mail Client

    PubMed Central

    Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Li, Feng

    2013-01-01

    Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method. PMID:23690880

  14. An Investment Behavior Analysis Using a Brain Computer Interface

    NASA Astrophysics Data System (ADS)

    Suzuki, Kyoko; Kinoshita, Kanta; Miyagawa, Kazuhiro; Shiomi, Shinichi; Misawa, Tadanobu; Shimokawa, Tetsuya

    In this paper, we construct a new Brain Computer Interface (BCI) for the purpose of analyzing human investment decision making. The BCI is made up of three functional parts, which take the roles of measuring brain information, determining the market price in an artificial market, and specifying the investment decision model, respectively. When subjects make decisions, their brain information is conveyed to the investment-decision-model part through the brain-measurement part, while their investment orders are sent to the artificial market to form market prices. Both a support vector machine and a three-layer perceptron are used to assess the investment decision model. In order to evaluate our BCI, we conduct an experiment in which subjects and a computer trader agent trade shares of stock in the artificial market, and we test how well the computer trader agent can forecast market price formation and investment decisions from the subjects' brain information. The result of the experiment shows that the brain information improves the accuracy of forecasts, so the computer trader agent can supply market liquidity to stabilize market volatility without incurring losses.

  15. Effects of Soft Drinks on Resting State EEG and Brain-Computer Interface Performance.

    PubMed

    Meng, Jianjun; Mundahl, John; Streitz, Taylor; Maile, Kaitlin; Gulachek, Nicholas; He, Jeffrey; He, Bin

    2017-01-01

    Motor imagery-based (MI-based) brain-computer interfaces (BCIs) using electroencephalography (EEG) allow users to directly control a computer or external device by modulating and decoding their brain waves. A variety of factors could potentially affect BCI performance, such as the health status of subjects or the environment. In this study, we investigated the effects of soft drinks and regular coffee on EEG signals in the resting state and on the performance of MI-based BCI. Twenty-six healthy human subjects participated in three or four BCI sessions with a resting period in each session. During each session, the subjects drank an unlabeled soft drink with either sugar (Caffeine Free Coca-Cola), caffeine (Diet Coke), or neither ingredient (Caffeine Free Diet Coke), or a regular coffee if there was a fourth session. The resting-state spectral power in each condition was compared; the analysis showed that power in the alpha and beta bands decreased substantially after caffeine consumption compared to the control and sugar conditions. Although powers were attenuated in the frequency range used for the online BCI control signal, group-averaged online BCI performance after consuming caffeine was similar to that of the other conditions. This work, for the first time, shows the effects of caffeine and sugar intake on online BCI performance and the resting-state brain signal.
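
    The resting-state spectral comparison described above amounts to estimating band power per condition, for example with a Welch periodogram, as sketched below on synthetic signals (scipy assumed; band edges follow the conventional alpha/beta definitions).

      import numpy as np
      from scipy.signal import welch

      def alpha_beta_power(eeg, fs):
          """Mean resting-state power in the alpha (8-13 Hz) and beta
          (13-30 Hz) bands, estimated with a Welch periodogram."""
          freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
          alpha = psd[(freqs >= 8) & (freqs < 13)].mean()
          beta = psd[(freqs >= 13) & (freqs < 30)].mean()
          return alpha, beta

      # Synthetic stand-ins for two recording conditions (real data would be
      # the subjects' resting EEG recorded around drink consumption).
      fs = 250
      t = np.arange(0, 60, 1.0 / fs)
      control = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
      caffeine = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
      print(alpha_beta_power(control, fs), alpha_beta_power(caffeine, fs))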

  16. General-purpose interface bus for multiuser, multitasking computer system

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for use by a medium-size research group is described. There are three central processing units (CPUs) in the configuration, each with 16 MB of memory and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast Fourier transformations. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  17. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    PubMed

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  18. Multi-robot control interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruemmer, David J; Walton, Miles C

    Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.

  19. Experimental study of heavy-ion computed tomography using a scintillation screen and an electron-multiplying charged coupled device camera for human head imaging

    NASA Astrophysics Data System (ADS)

    Muraishi, Hiroshi; Hara, Hidetake; Abe, Shinji; Yokose, Mamoru; Watanabe, Takara; Takeda, Tohoru; Koba, Yusuke; Fukuda, Shigekazu

    2016-03-01

    We have developed a heavy-ion computed tomography (IonCT) system using a scintillation screen and an electron-multiplying charged coupled device (EMCCD) camera that can measure a large object such as a human head. The objective in developing the system was to investigate the possibility of applying it to heavy-ion treatment planning from the point of view of the spatial resolution of the reconstructed image. Experiments were carried out on a rotation phantom using 12C accelerated up to 430 MeV/u by the Heavy-Ion Medical Accelerator in Chiba (HIMAC) at the National Institute of Radiological Sciences (NIRS). We demonstrated that an object with a water equivalent thickness (WET) of approximately 18 cm was successfully reconstructed with a spatial resolution of 1 mm, which would make this IonCT system worth applying to heavy-ion treatment planning for head and neck cancers.

  20. Gas cushion control of OVJP print head position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrest, Stephen R

    An OVJP apparatus and method for applying organic vapor or other flowable material to a substrate using a printing head mechanism in which the print head spacing from the substrate is controllable using a cushion of air or other gas applied between the print head and substrate. The print head is mounted for translational movement towards and away from the substrate and is biased toward the substrate by springs or other means. A gas cushion feed assembly supplies a gas under pressure between the print head and substrate which opposes the biasing of the print head toward the substrate so as to form a space between the print head and substrate. By controlling the pressure of gas supplied, the print head separation from the substrate can be precisely controlled.

  1. Computational structure analysis of biomacromolecule complexes by interface geometry.

    PubMed

    Mahdavi, Sedigheh; Salehzadeh-Yazdi, Ali; Mohades, Ali; Masoudi-Nejad, Ali

    2013-12-01

    The ability to analyze and compare protein-nucleic acid and protein-protein interaction interfaces is of critical importance in understanding biological function and the essential processes occurring in cells. Since high-resolution three-dimensional (3D) structures of biomacromolecule complexes are available, computational characterization of interface geometry has become an important research topic in the field of molecular biology. In this study, the interfaces of a set of 180 protein-nucleic acid and protein-protein complexes are computed to understand the principles of their interactions. The weighted Voronoi diagram of the atoms and the alpha complex provide an accurate description of the interface atoms. Our method is implemented in both the presence and absence of water molecules. A comparison among the three types of interaction interfaces shows that RNA-protein complexes have the largest interfaces. The results show a high correlation coefficient between our method and the PISA server, in both the presence and absence of water molecules, for both the Voronoi model and the traditional model based on solvent accessibility, as well as high validation parameters in comparison to the classical model. Copyright © 2013 Elsevier Ltd. All rights reserved.
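
    A highly simplified sketch of the geometric idea follows: interface atoms can be found from the tetrahedra of a Delaunay triangulation that join the two chains, with a distance filter standing in for the alpha-complex pruning. An unweighted triangulation and random coordinates are used here instead of the paper's weighted Voronoi model.

      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(2)

      # Toy "complex": atom coordinates for two chains (e.g. protein vs RNA).
      chain_a = rng.normal(0.0, 1.0, (40, 3))
      chain_b = rng.normal(1.0, 1.0, (40, 3))
      points = np.vstack([chain_a, chain_b])
      chain = np.array([0] * 40 + [1] * 40)

      # Interface atoms: atoms joined by a short Delaunay edge to the other
      # chain. (The unweighted triangulation plus distance cutoff is a
      # simplified stand-in for the weighted Voronoi / alpha-complex model.)
      tri = Delaunay(points)
      interface = set()
      for simplex in tri.simplices:
          for i in simplex:
              for j in simplex:
                  if chain[i] != chain[j] and np.linalg.norm(points[i] - points[j]) < 1.2:
                      interface.update((i, j))
      print(len(interface), "interface atoms")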

  2. A novel method for intraoral access to the superior head of the human lateral pterygoid muscle.

    PubMed

    Oliveira, Aleli Tôrres; Camilo, Anderson Aparecido; Bahia, Paulo Roberto Valle; Carvalho, Antonio Carlos Pires; DosSantos, Marcos Fabio; da Silva, Jorge Vicente Lopes; Monteiro, André Antonio

    2014-01-01

    The uncoordinated activity of the superior and inferior parts of the lateral pterygoid muscle (LPM) has been suggested to be one of the causes of temporomandibular joint (TMJ) disc displacement. A therapy for this muscle disorder is the injection of botulinum toxin (BTX) into the LPM. However, there is a potential risk of side effects with the injection guide methods currently available, and they do not permit appropriate differentiation between the two bellies of the muscle. Herein, a novel method is presented to provide intraoral access to the superior head of the human LPM with maximal control and minimal hazards. Computed tomography along with digital imaging software programs and rapid prototyping techniques were used to create a rapid prototyped guide to orient BTX injections in the superior LPM. The method proved to be feasible and reliable. Furthermore, when tested in one volunteer, it allowed precise access to the upper head of the LPM without producing side effects. The prototyped guide presented in this paper is a novel tool that provides intraoral access to the superior head of the LPM. Further studies will be necessary to test the efficacy of this method and validate it in a larger cohort of subjects.

  3. Digital Interface Board to Control Phase and Amplitude of Four Channels

    NASA Technical Reports Server (NTRS)

    Smith, Amy E.; Cook, Brian M.; Khan, Abdur R.; Lux, James P.

    2011-01-01

    An increasing number of parts are designed with digital control interfaces, including phase shifters and variable attenuators. When designing an antenna array in which each antenna has independent amplitude and phase control, the number of digital control lines that must be set simultaneously can grow very large. Use of a parallel interface would require separate line drivers, more parts, and thus additional failure points. A convenient form of control where single-phase shifters or attenuators could be set or the whole set could be programmed with an update rate of 100 Hz is needed to solve this problem. A digital interface board with a field-programmable gate array (FPGA) can simultaneously control an essentially arbitrary number of digital control lines with a serial command interface requiring only three wires. A small set of short, high-level commands provides a simple programming interface for an external controller. Parity bits are used to validate the control commands. Output timing is controlled within the FPGA to allow for rapid update rates of the phase shifters and attenuators. This technology has been used to set and monitor eight 5-bit control signals via a serial UART (universal asynchronous receiver/transmitter) interface. The digital interface board controls the phase and amplitude of the signals for each element in the array. A host computer running Agilent VEE sends commands via serial UART connection to a Xilinx VirtexII FPGA. The commands are decoded, and either outputs are set or telemetry data is sent back to the host computer describing the status and the current phase and amplitude settings. This technology is an integral part of a closed-loop system in which the angle of arrival of an X-band uplink signal is detected and the appropriate phase shifts are applied to the Ka-band downlink signal to electronically steer the array back in the direction of the uplink signal. It will also be used in the non-beam-steering case to compensate for
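
    The parity-validated command words described above can be illustrated with a toy frame layout (3-bit channel, 5-bit setting, one even-parity bit); the layout is hypothetical, not the board's actual format.

      def parity_bit(value, nbits):
          """Even-parity bit over the low nbits of value."""
          return bin(value & ((1 << nbits) - 1)).count("1") & 1

      def encode_command(channel, setting):
          """Pack a hypothetical command word: 3-bit channel, 5-bit setting,
          1 even-parity bit (frame layout is illustrative, not the board's)."""
          payload = (channel & 0b111) << 5 | (setting & 0b11111)
          return payload << 1 | parity_bit(payload, 8)

      def decode_command(word):
          """Validate parity before applying the command; reject corrupt frames."""
          payload, p = word >> 1, word & 1
          if parity_bit(payload, 8) != p:
              raise ValueError("parity error: command rejected")
          return (payload >> 5) & 0b111, payload & 0b11111

      word = encode_command(channel=3, setting=17)
      print(decode_command(word))              # (3, 17)
      try:
          decode_command(word ^ 0b100)         # flipped bit -> parity error
      except ValueError as err:
          print(err)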

  4. Evaluation of a modified Fitts law brain-computer interface target acquisition task in able and motor disabled individuals

    NASA Astrophysics Data System (ADS)

    Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.

    2009-10-01

    A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
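
    The two quantities at the heart of this evaluation are the Fitts index of difficulty, ID = log2(2D/W), and the information transfer rate, ID divided by movement time; a short sketch with illustrative numbers follows.

      import math

      def index_of_difficulty(distance, width):
          """Fitts' law index of difficulty (bits) for a target of a given
          width at a given movement distance: ID = log2(2D / W)."""
          return math.log2(2.0 * distance / width)

      def information_transfer_rate(distance, width, movement_time):
          """Throughput in bits/s: task difficulty divided by acquisition time."""
          return index_of_difficulty(distance, width) / movement_time

      # Illustrative trial: 12 cm movement to a 3 cm target acquired in 2.5 s.
      print(round(index_of_difficulty(12, 3), 2), "bits")
      print(round(information_transfer_rate(12, 3, 2.5), 2), "bits/s")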

  5. Feedback Controlled Colloidal Assembly at Fluid Interfaces

    NASA Astrophysics Data System (ADS)

    Bevan, Michael

    The autonomous and reversible assembly of colloidal nano- and micro- scale components into ordered configurations is often suggested as a scalable process capable of manufacturing meta-materials with exotic electromagnetic properties. As a result, there is strong interest in understanding how thermal motion, particle interactions, patterned surfaces, and external fields can be optimally coupled to robustly control the assembly of colloidal components into hierarchically structured functional meta-materials. We approach this problem by directly relating equilibrium and dynamic colloidal microstructures to kT-scale energy landscapes mediated by colloidal forces, physically and chemically patterned surfaces, multiphase fluid interfaces, and electromagnetic fields. 3D colloidal trajectories are measured in real-space and real-time with nanometer resolution using an integrated suite of evanescent wave, video, and confocal microscopy methods. Equilibrium structures are connected to energy landscapes via statistical mechanical models. The dynamic evolution of initially disordered colloidal fluid configurations into colloidal crystals in the presence of tunable interactions (electromagnetic field mediated interactions, particle-interface interactions) is modeled using a novel approach based on fitting the Fokker-Planck equation to experimental microscopy and computer simulated assembly trajectories. This approach is based on the use of reaction coordinates that capture important microstructural features of crystallization processes and quantify both statistical mechanical (free energy) and fluid mechanical (hydrodynamic) contributions. Ultimately, we demonstrate real-time control of assembly, disassembly, and repair of colloidal crystals using both open loop and closed loop control to produce perfectly ordered colloidal microstructures. This approach is demonstrated for close packed colloidal crystals of spherical particles at fluid-solid interfaces and is being extended

  6. The cortical mouse: a piece of forgotten history in noninvasive brain–computer interfaces.

    PubMed

    Principe, Jose C

    2013-07-01

    Early research on brain-computer interfaces (BCIs) was fueled by the study of event-related potentials (ERPs) by Farwell and Donchin, who are rightly credited for laying important groundwork for the BCI field. However, many other researchers have made substantial contributions that have escaped the radar screen of the current BCI community. For example, in the late 1980s, I worked with a brilliant multidisciplinary research group in electrical engineering at the University of Florida, Gainesville, headed by Dr. Donald Childers. Childers should be well known to long-time members of the IEEE Engineering in Medicine and Biology Society since he was the editor-in-chief of IEEE Transactions on Biomedical Engineering in the 1970s and the recipient of one of the most prestigious society awards, the William J. Morlock Award, in 1973.

  7. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    PubMed

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level and proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system lets the end-user retain free head movement while achieving high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes, yielding a fully automated calibration procedure with several thousand calibration points, versus the dozen or so points of standard approaches, and beyond state-of-the-art 3D accuracy and precision.
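
    The calibration idea above — fitting a gaze-to-endpoint map from many samples collected while the robot traces a known trajectory — can be sketched as a linear least-squares fit; the gaze features, trajectory, and noise level below are simulated stand-ins.

      import numpy as np

      rng = np.random.default_rng(3)

      # Simulated calibration run: the robot visits known 3-D positions along
      # a dense trajectory (stand-in for the space-filling Peano curve) while
      # the user's binocular gaze features are recorded.
      n = 2000
      gaze = rng.uniform(-1, 1, (n, 4))        # e.g. L/R eye horizontal+vertical angles
      true_map = rng.normal(size=(4, 3))
      robot_xyz = gaze @ true_map + 0.01 * rng.normal(size=(n, 3))

      # Least-squares fit of an affine gaze -> 3-D end-point map; thousands of
      # samples make the fit far better conditioned than a dozen fixed points.
      gaze_h = np.hstack([gaze, np.ones((n, 1))])
      coeffs, *_ = np.linalg.lstsq(gaze_h, robot_xyz, rcond=None)

      # Prediction error on held-out gaze samples (synthetic scale).
      test = rng.uniform(-1, 1, (200, 4))
      pred = np.hstack([test, np.ones((200, 1))]) @ coeffs
      print(np.sqrt(np.mean((pred - test @ true_map) ** 2)))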

  8. Quantification of visual clutter using a computation model of human perception : an application for head-up displays

    DOT National Transportation Integrated Search

    2004-03-20

    A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...

  9. Asynchronous P300-based brain-computer interface to control a virtual environment: initial tests on end users.

    PubMed

    Aloise, Fabio; Schettini, Francesca; Aricò, Pietro; Salinari, Serenella; Guger, Christoph; Rinsma, Johanna; Aiello, Marco; Mattia, Donatella; Cincotti, Febo

    2011-10-01

    Motor disability and/or ageing can prevent individuals from fully enjoying home facilities, thus worsening their quality of life. Advances in the field of accessible user interfaces for domotic appliances can represent a valuable way to improve the independence of these persons. An asynchronous P300-based Brain-Computer Interface (BCI) system was recently validated with the participation of healthy young volunteers for environmental control. In this study, the asynchronous P300-based BCI for the interaction with a virtual home environment was tested with the participation of potential end-users (clients of a Frisian home care organization) with limited autonomy due to ageing and/or motor disabilities. System testing revealed that the minimum number of stimulation sequences needed to achieve correct classification had a higher intra-subject variability in potential end-users with respect to what was previously observed in young controls. Here we show that the asynchronous modality performed significantly better as compared to the synchronous mode in continuously adapting its speed to the users' state. Furthermore, the asynchronous system modality confirmed its reliability in avoiding misclassifications and false positives, as previously shown in young healthy subjects. The asynchronous modality may contribute to filling the usability gap between BCI systems and traditional input devices, representing an important step towards their use in the activities of daily living.

  10. Electrostatics with Computer-Interfaced Charge Sensors

    ERIC Educational Resources Information Center

    Morse, Robert A.

    2006-01-01

    Computer interfaced electrostatic charge sensors allow both qualitative and quantitative measurements of electrostatic charge but are quite sensitive to charges accumulating on modern synthetic materials. They need to be used with care so that students can correctly interpret their measurements. This paper describes the operation of the sensors,…

  11. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA.

    PubMed

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to consider the psychological cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and following the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interfaces were summarized as the constraints in layout design. Next, the expression forms of the fitness function, pheromone, and heuristic information for cabin layout optimization were studied, and a layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method.

  12. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA

    PubMed Central

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to consider the psychological cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and following the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interfaces were summarized as the constraints in layout design. Next, the expression forms of the fitness function, pheromone, and heuristic information for cabin layout optimization were studied, and a layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method. PMID:26884745

  13. Leveraging anatomical information to improve transfer learning in brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian K. C.

    2015-08-01

    Objective. Brain-computer interfaces (BCIs) represent a technology with the potential to rehabilitate a range of traumatic and degenerative nervous system conditions but require a time-consuming training process to calibrate. An area of BCI research known as transfer learning is aimed at accelerating training by recycling previously recorded training data across sessions or subjects. Training data, however, is typically transferred from one electrode configuration to another without taking individual head anatomy or electrode positioning into account, which may underutilize the recycled data. Approach. We explore transfer learning with the use of source imaging, which estimates neural activity in the cortex. Transferring estimates of cortical activity, in contrast to scalp recordings, provides a way to compensate for variability in electrode positioning and head morphologies across subjects and sessions. Main results. Based on simulated and measured electroencephalography activity, we trained a classifier using data transferred exclusively from other subjects and achieved accuracies that were comparable to or surpassed a benchmark classifier (representative of a real-world BCI). Our results indicate that classification improvements depend on the number of trials transferred and the cortical region of interest. Significance. These findings suggest that cortical source-based transfer learning is a principled method to transfer data that improves BCI classification performance and provides a path to reduce BCI calibration time.

  14. Leveraging anatomical information to improve transfer learning in brain-computer interfaces

    PubMed Central

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian KC

    2015-01-01

    Objective Brain-computer interfaces (BCIs) represent a technology with the potential to rehabilitate a range of traumatic and degenerative nervous system conditions but require a time-consuming training process to calibrate. An area of BCI research known as transfer learning is aimed at accelerating training by recycling previously recorded training data across sessions or subjects. Training data, however, is typically transferred from one electrode configuration to another without taking individual head anatomy or electrode positioning into account, which may underutilize the recycled data. Approach We explore transfer learning with the use of source imaging, which estimates neural activity in the cortex. Transferring estimates of cortical activity, in contrast to scalp recordings, provides a way to compensate for variability in electrode positioning and head morphologies across subjects and sessions. Main Results Based on simulated and measured EEG activity, we trained a classifier using data transferred exclusively from other subjects and achieved accuracies that were comparable to or surpassed a benchmark classifier (representative of a real-world BCI). Our results indicate that classification improvements depend on the number of trials transferred and the cortical region of interest. Significance These findings suggest that cortical source-based transfer learning is a principled method to transfer data that improves BCI classification performance and provides a path to reduce BCI calibration time. PMID:26169961
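
    A toy version of the transfer scheme in records 13 and 14 — training only on other subjects' trials expressed in a shared (source-space) feature basis and testing on an unseen subject — is sketched below, with a nearest-class-mean classifier and simulated features standing in for the study's classifier and EEG source estimates.

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate_subject(offset, n=100, d=20):
          """Two-class trials in a shared (source-space) feature basis; each
          subject gets its own random offset to mimic inter-subject variability."""
          X0 = rng.normal(0, 1, (n, d)) + offset
          X1 = rng.normal(0.8, 1, (n, d)) + offset
          return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

      # Pool training trials transferred from other subjects only.
      Xs, ys = zip(*[simulate_subject(rng.normal(0, 0.2, 20)) for _ in range(5)])
      X_train, y_train = np.vstack(Xs), np.concatenate(ys)

      # Nearest-class-mean classifier (stand-in for the paper's classifier).
      means = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

      # Evaluate on an unseen subject with zero subject-specific calibration.
      X_test, y_test = simulate_subject(rng.normal(0, 0.2, 20))
      pred = np.argmin(((X_test[:, None, :] - means) ** 2).sum(-1), axis=1)
      print("transfer accuracy:", (pred == y_test).mean())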

  15. Neuroanatomical correlates of brain-computer interface performance.

    PubMed

    Kasahara, Kazumi; DaSalla, Charles Sayo; Honda, Manabu; Hanakawa, Takashi

    2015-04-15

    Brain-computer interfaces (BCIs) offer a potential means to replace or restore lost motor function. However, BCI performance varies considerably between users, the reasons for which are poorly understood. Here we investigated the relationship between sensorimotor rhythm (SMR)-based BCI performance and brain structure. Participants were instructed to control a computer cursor using right- and left-hand motor imagery, which primarily modulated their left- and right-hemispheric SMR powers, respectively. Although most participants were able to control the BCI with success rates significantly above chance level even at the first encounter, they also showed substantial inter-individual variability in BCI success rate. Participants also underwent T1-weighted three-dimensional structural magnetic resonance imaging (MRI). The MRI data were subjected to voxel-based morphometry using BCI success rate as an independent variable. We found that BCI performance correlated with gray matter volume of the supplementary motor area, supplementary somatosensory area, and dorsal premotor cortex. We suggest that SMR-based BCI performance is associated with development of non-primary somatosensory and motor areas. Advancing our understanding of BCI performance in relation to its neuroanatomical correlates may lead to better customization of BCIs based on individual brain structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Evaluation of head orientation and neck muscle EMG signals as three-dimensional command sources.

    PubMed

    Williams, Matthew R; Kirsch, Robert F

    2015-03-05

    High cervical spinal cord injuries result in significant functional impairments and affect both the injured individual as well as their family and care givers. To help restore function to these individuals, multiple user interfaces are available to enable command and control of external devices. However, little work has been performed to assess the 3D performance of these interfaces. We investigated the performance of eight human subjects in using three user interfaces (head orientation, EMG from muscles of the head and neck, and a three-axis joystick) to command the endpoint position of a multi-axis robotic arm within a 3D workspace to perform a novel out-to-center 3D Fitts' Law style task. Two of these interfaces (head orientation, EMG from muscles of the head and neck) could realistically be used by individuals with high tetraplegia, while the joystick was evaluated as a standard of high performance. Performance metrics were developed to assess the aspects of command source performance. Data were analyzed using a mixed model design ANOVA. Fixed effects were investigated between sources as well as for interactions between index of difficulty, command source, and the five performance measures used. A 5% threshold for statistical significance was used in the analysis. The performances of the three command interfaces were rather similar, though significant differences between command sources were observed. The apparent similarity is due in large part to the sequential command strategy (i.e., one dimension of movement at a time) typically adopted by the subjects. EMG-based commands were particularly pulsatile in nature. The use of sequential commands had a significant impact on each command source's performance for movements in two or three dimensions. While the sequential nature of the commands produced by the user did not fit with Fitts' Law, the other performance measures used were able to illustrate the properties of each command source. Though pulsatile, given

  17. Experimental conical-head abutment screws on the microbial leakage through the implant-abutment interface: an in vitro analysis using target-specific DNA probes.

    PubMed

    Pita, Murillo S; do Nascimento, Cássio; Dos Santos, Carla G P; Pires, Isabela M; Pedrazzi, Vinícius

    2017-07-01

    The aim of this in vitro study was to identify and quantify up to 38 microbial species from human saliva penetrating through the implant-abutment interface in two different implant connections, external hexagon and tri-channel internal connection, both with conventional flat-head or experimental conical-head abutment screws. Forty-eight two-part implants with external hexagon (EH; n = 24) or tri-channel internal (TI; n = 24) connections were investigated. Abutments were attached to implants with conventional flat-head or experimental conical-head screws. After saliva incubation, Checkerboard DNA-DNA hybridization was used to identify and quantify up to 38 bacterial species colonizing the internal parts of the implants. The Kruskal-Wallis test followed by Bonferroni's post-tests for multiple comparisons was used for statistical analysis. Twenty-four of the thirty-eight species, including putative periodontal pathogens, were found colonizing the inner surfaces of both EH and TI implants. Peptostreptococcus anaerobius (P = 0.003), Prevotella melaninogenica (P < 0.0001), and Candida dubliniensis (P < 0.0001) presented significant differences between groups. Means of total microbial counts (×10^4, ±SD) for each group were recorded as follows: G1 (0.27 ± 2.04), G2 (0 ± 0), G3 (1.81 ± 7.50), and G4 (0.35 ± 1.81). Differences in the geometry of implant connections and abutment screws affected microbial leakage through the implant-abutment interface. Implants attached with experimental conical-head abutment screws showed lower counts of microorganisms than those with conventional flat-head screws. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Broadening the interface bandwidth in simulation based training

    NASA Technical Reports Server (NTRS)

    Somers, Larry E.

    1989-01-01

    Currently, most computer-based simulations rely exclusively on computer-generated graphics to create the simulation. When training is involved, the method used almost exclusively to display information to the learner is text on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer-based graphics and text. Researchers are currently involved in the development of several graphics-based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.

  19. An auditory brain-computer interface evoked by natural speech

    NASA Astrophysics Data System (ADS)

    Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto

    2012-06-01

    Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs are limited in their ultimate use because they require the subjects to gaze, explore and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or under the condition of the unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits/min in full-length trials and 2.7 bits/min using the optimal length of trials, recorded with only one channel and without formal training. This novel technique opens the door to a more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
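    Information transmission rates such as the 1.5 and 2.7 bits/min reported above are conventionally computed with the Wolpaw formula from the number of selectable classes, the classification accuracy, and the trial duration. A minimal sketch follows; the two-class, 85%-accuracy, 20 s example values are illustrative assumptions, not data from the study:

```python
import math

def wolpaw_itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate for an n-class selection task."""
    n, p = n_classes, accuracy
    bits_per_trial = math.log2(n)
    if 0.0 < p < 1.0:
        bits_per_trial += (p * math.log2(p)
                           + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))
    return bits_per_trial * 60.0 / trial_seconds

# Illustrative: a binary dichotic-listening choice at 85% accuracy, 20 s trials.
print(f"{wolpaw_itr_bits_per_min(2, 0.85, 20.0):.2f} bits/min")  # ~1.17
```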

  20. A database for TMT interface control documents

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
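    As an illustration of the kind of JSON-style command interface file described above, the sketch below shows a plausible entry; the field names and the MountAssembly example are invented for this sketch and are not the actual TMT CSW schema:

```python
import json

# Hypothetical command-interface entry in the spirit of the JSON-style files
# described above; field names and the MountAssembly example are invented,
# not the actual TMT CSW schema.
command_interface = {
    "subsystem": "TCS",
    "component": "MountAssembly",
    "receives": [
        {
            "name": "setAltAz",
            "description": "Slew the mount to an alt/az position.",
            "parameters": [
                {"name": "alt", "type": "double", "units": "deg"},
                {"name": "az", "type": "double", "units": "deg"},
            ],
        }
    ],
    "sends": ["TCS.MountAssembly.stop"],
}

print(json.dumps(command_interface, indent=2))
```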

  1. A dictionary learning approach for human sperm heads classification.

    PubMed

    Shaker, Fariba; Monadjemi, S Amirhassan; Alirezaie, Javad; Naghsh-Nilchi, Ahmad Reza

    2017-12-01

    To diagnose infertility in men, semen analysis is conducted in which sperm morphology is one of the factors that are evaluated. Since manual assessment of sperm morphology is time-consuming and subjective, automatic classification methods are being developed. Automatic classification of sperm heads is a complicated task due to the intra-class differences and inter-class similarities of class objects. In this research, a Dictionary Learning (DL) technique is utilized to construct a dictionary of sperm head shapes. This dictionary is used to classify the sperm heads into four different classes. Square patches are extracted from the sperm head images. Columnized patches from each class of sperm are used to learn class-specific dictionaries. The patches from a test image are reconstructed using each class-specific dictionary and the overall reconstruction error for each class is used to select the best matching class. Average accuracy, precision, recall, and F-score are used to evaluate the classification method. The method is evaluated using two publicly available datasets of human sperm head shapes. The proposed DL based method achieved an average accuracy of 92.2% on the HuSHeM dataset, and an average recall of 62% on the SCIAN-MorphoSpermGS dataset. The results show a significant improvement compared to a previously published shape-feature-based method. We have achieved high-performance results. In addition, our proposed approach offers a more balanced classifier in which all four classes are recognized with high precision and recall. In this paper, we use a Dictionary Learning approach in classifying human sperm heads. It is shown that the Dictionary Learning method is far more effective in classifying human sperm heads than classifiers using shape-based features. Also, a dataset of human sperm head shapes is introduced to facilitate future research. Copyright © 2017 Elsevier Ltd. All rights reserved.
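    The decision rule described above, reconstructing test patches with each class-specific dictionary and choosing the class with the smallest reconstruction error, can be sketched with off-the-shelf dictionary learning. This is a minimal illustration, assuming patches_by_class maps each class label to an (n_patches, n_pixels) array of columnized patches; it is not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_dictionaries(patches_by_class, n_atoms=64):
    """Learn one dictionary per class from (n_patches, n_pixels) arrays."""
    return {
        label: MiniBatchDictionaryLearning(
            n_components=n_atoms, transform_algorithm="omp"
        ).fit(patches)
        for label, patches in patches_by_class.items()
    }

def classify(test_patches, dictionaries):
    """Assign the class whose dictionary reconstructs the patches best."""
    errors = {}
    for label, dl in dictionaries.items():
        codes = dl.transform(test_patches)   # sparse codes
        recon = codes @ dl.components_       # reconstructed patches
        errors[label] = np.sum((test_patches - recon) ** 2)
    return min(errors, key=errors.get)
```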

  2. Developing the human-computer interface for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Holden, Kritina L.

    1991-01-01

    For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.

  3. The computational structural mechanics testbed architecture. Volume 2: The interface

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    This is the third of a set of five volumes which describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP and CLIP and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 3 describes the CLIP-processor interface and related topics. It is intended only for processor developers.

  4. Brain-computer interface controlled functional electrical stimulation system for ankle movement.

    PubMed

    Do, An H; Wang, Po T; King, Christine E; Abiri, Ahmad; Nenadic, Zoran

    2011-08-26

    Many neurological conditions, such as stroke, spinal cord injury, and traumatic brain injury, can cause chronic gait function impairment due to foot-drop. Current physiotherapy techniques provide only a limited degree of motor function recovery in these individuals, and therefore novel therapies are needed. Brain-computer interface (BCI) is a relatively novel technology with a potential to restore, substitute, or augment lost motor behaviors in patients with neurological injuries. Here, we describe the first successful integration of a noninvasive electroencephalogram (EEG)-based BCI with a noninvasive functional electrical stimulation (FES) system that enables the direct brain control of foot dorsiflexion in able-bodied individuals. A noninvasive EEG-based BCI system was integrated with a noninvasive FES system for foot dorsiflexion. Subjects underwent computer-cued epochs of repetitive foot dorsiflexion and idling while their EEG signals were recorded and stored for offline analysis. The analysis generated a prediction model that allowed EEG data to be analyzed and classified in real time during online BCI operation. The real-time online performance of the integrated BCI-FES system was tested in a group of five able-bodied subjects who used repetitive foot dorsiflexion to elicit BCI-FES mediated dorsiflexion of the contralateral foot. Five able-bodied subjects performed 10 alternations of idling and repetitive foot dorsiflexion to trigger BCI-FES mediated dorsiflexion of the contralateral foot. The epochs of BCI-FES mediated foot dorsiflexion were highly correlated with the epochs of voluntary foot dorsiflexion (correlation coefficient ranged between 0.59 and 0.77) with latencies ranging from 1.4 sec to 3.1 sec. In addition, all subjects achieved a 100% BCI-FES response (no omissions), and one subject had a single false alarm. This study suggests that the integration of a noninvasive BCI with a lower-extremity FES system is feasible. With additional modifications
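    Correlation and latency figures like those above suggest a straightforward analysis: compute the correlation between the voluntary and BCI-FES state vectors and estimate the latency from the lag that best aligns them. A minimal sketch, assuming two equal-length binary state vectors sampled at a common rate (not the authors' pipeline):

```python
import numpy as np

def epoch_correlation_and_latency(voluntary, bci_fes, fs_hz):
    """Correlation of two binary state vectors, plus the lag (in seconds)
    that best aligns the BCI-FES output with the voluntary epochs."""
    v = voluntary.astype(float) - voluntary.mean()
    b = bci_fes.astype(float) - bci_fes.mean()
    xcorr = np.correlate(b, v, mode="full")
    lags = np.arange(-len(v) + 1, len(v))
    latency_s = lags[np.argmax(xcorr)] / fs_hz  # positive: BCI-FES lags
    r = np.corrcoef(voluntary, bci_fes)[0, 1]
    return r, latency_s
```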

  5. Demonstration of a Semi-Autonomous Hybrid Brain-Machine Interface using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic

    PubMed Central

    McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.

    2014-01-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914

  6. Brain-machine interfacing control of whole-body humanoid motion

    PubMed Central

    Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun

    2014-01-01

    We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to a human-like mechanical avatar. Our solution is based on the adequate reduction of the controllable dimensionality of a high-DOF humanoid motion in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complement subspace part of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134

  7. Triple redundant computer system/display and keyboard subsystem interface

    NASA Technical Reports Server (NTRS)

    Gulde, F. J.

    1973-01-01

    Interfacing of the redundant display and keyboard subsystem with the triple redundant computer system is defined according to space shuttle design. The study is performed in three phases: (1) TRCS configuration and characteristics identification; (2) display and keyboard subsystem configuration and characteristics identification, and (3) interface approach definition.

  8. Making IBM's Computer, Watson, Human

    PubMed Central

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial “self,” are all incidental to its humanity. PMID:22942530

  9. Making IBM's Computer, Watson, Human.

    PubMed

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial "self," are all incidental to its humanity.

  10. Evaluation of a Dry EEG System for Application of Passive Brain-Computer Interfaces in Autonomous Driving

    PubMed Central

    Zander, Thorsten O.; Andreessen, Lena M.; Berg, Angela; Bleuel, Maurice; Pawlitzki, Juliane; Zawallich, Lars; Krol, Laurens R.; Gramann, Klaus

    2017-01-01

    We tested the applicability and signal quality of a 16 channel dry electroencephalography (EEG) system in a laboratory environment and in a car under controlled, realistic conditions. The aim of our investigation was an estimate of how well a passive Brain-Computer Interface (pBCI) can work in an autonomous driving scenario. The evaluation considered speed and accuracy of self-applicability by an untrained person, quality of recorded EEG data, shifts of electrode positions on the head after driving-related movements, usability and complexity of the system as such, and wearing comfort over time. An experiment was conducted inside and outside of a stationary vehicle with running engine, air-conditioning, and muted radio. Signal quality was sufficient for standard EEG analysis in the time and frequency domain as well as for use in pBCIs. While the influence of vehicle-induced interferences on data quality was insignificant, driving-related movements led to strong shifts in electrode positions. In general, the EEG system used allowed for a fast self-applicability of cap and electrodes. The assessed usability of the system was still acceptable while the wearing comfort decreased strongly over time due to friction and pressure to the head. From these results we conclude that the evaluated system should provide the essential requirements for an application in an autonomous driving context. Nevertheless, further refinement is suggested to reduce shifts of the system due to body movements and increase the headset's usability and wearing comfort. PMID:28293184

  11. Evaluation of a Dry EEG System for Application of Passive Brain-Computer Interfaces in Autonomous Driving.

    PubMed

    Zander, Thorsten O; Andreessen, Lena M; Berg, Angela; Bleuel, Maurice; Pawlitzki, Juliane; Zawallich, Lars; Krol, Laurens R; Gramann, Klaus

    2017-01-01

    We tested the applicability and signal quality of a 16 channel dry electroencephalography (EEG) system in a laboratory environment and in a car under controlled, realistic conditions. The aim of our investigation was an estimate of how well a passive Brain-Computer Interface (pBCI) can work in an autonomous driving scenario. The evaluation considered speed and accuracy of self-applicability by an untrained person, quality of recorded EEG data, shifts of electrode positions on the head after driving-related movements, usability and complexity of the system as such, and wearing comfort over time. An experiment was conducted inside and outside of a stationary vehicle with running engine, air-conditioning, and muted radio. Signal quality was sufficient for standard EEG analysis in the time and frequency domain as well as for use in pBCIs. While the influence of vehicle-induced interferences on data quality was insignificant, driving-related movements led to strong shifts in electrode positions. In general, the EEG system used allowed for a fast self-applicability of cap and electrodes. The assessed usability of the system was still acceptable while the wearing comfort decreased strongly over time due to friction and pressure to the head. From these results we conclude that the evaluated system should provide the essential requirements for an application in an autonomous driving context. Nevertheless, further refinement is suggested to reduce shifts of the system due to body movements and increase the headset's usability and wearing comfort.

  12. [The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].

    PubMed

    Ganin, I P; Kaplan, A Ia

    2014-01-01

    The P300-based brain-computer interface requires the detection of the P300 wave of brain event-related potentials. Most of its users learn BCI control in several minutes, and after short classifier training they can type text on the computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, insufficient attractiveness for users and the conservative stimulus organization of this BCI may restrict its integration into the control of real information processes. At the same time, the initial movement of an object (motion-onset stimuli) may be an independent factor that induces the P300 wave. In the current work we tested the hypothesis that complex "flash + movement" stimuli, together with a drastic and compact stimulus organization on the computer screen, may be much more attractive for the user while operating the P300 BCI. In a study of 20 subjects we showed the effectiveness of our interface. Both accuracy and P300 amplitude were higher for flashing stimuli and complex "flash + movement" stimuli compared to motion-onset stimuli. N200 amplitude was maximal for flashing stimuli, while for "flash + movement" stimuli and motion-onset stimuli it was only half of that. Similar BCIs with complex stimuli may be embedded into compact control systems requiring a high level of user attention under the impact of negative external effects obstructing BCI control.
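    P300 detection of the kind discussed above starts from stimulus-locked averaging. As a minimal, illustrative sketch (not the authors' classifier), the mean amplitude of the averaged epoch in a typical 250-500 ms post-stimulus window can serve as a P300 score for each stimulus type; the 800 ms epoch length and the window bounds are assumptions:

```python
import numpy as np

def p300_score(eeg, onsets, fs_hz, window=(0.25, 0.50)):
    """Mean amplitude of the stimulus-locked average in the P300 window.

    eeg: 1-D single-channel signal; onsets: stimulus sample indices
    (assumed to leave room for a full epoch before the recording ends).
    """
    epoch_len = int(0.8 * fs_hz)                  # assumed 800 ms epochs
    epochs = np.stack([eeg[o:o + epoch_len] for o in onsets])
    erp = epochs.mean(axis=0)                     # stimulus-locked average
    lo, hi = (int(t * fs_hz) for t in window)
    return erp[lo:hi].mean()
```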

  13. A passive brain-computer interface application for the mental workload assessment on professional air traffic controllers during realistic air traffic control tasks.

    PubMed

    Aricò, P; Borghini, G; Di Flumeri, G; Colosimo, A; Pozzi, S; Babiloni, F

    2016-01-01

    Over the last decades, the passive brain-computer interface (p-BCI) has been a fast-growing concept in the neuroscience field. p-BCI systems make it possible to improve the human-machine interaction (HMI) in operational environments by using the covert brain activity (e.g., mental workload) of the operator. However, p-BCI technology could suffer from some practical issues when used outside the laboratory. In particular, one of the most important limitations is the necessity to recalibrate the p-BCI system each time before its use, to avoid a significant reduction of its reliability in the detection of the considered mental states. The objective of the proposed study was to provide an example of p-BCIs used to evaluate the users' mental workload in a real operational environment. For this purpose, through the facilities provided by the École Nationale de l'Aviation Civile of Toulouse (France), the cerebral activity of 12 professional air traffic control officers (ATCOs) was recorded while they performed highly realistic air traffic management scenarios. Through the analysis of the ATCOs' brain activity (electroencephalographic signal, EEG) and the subjective workload perception (instantaneous self-assessment) provided by both the examined ATCOs and external air traffic control experts, it was possible to estimate and evaluate the variation of the mental workload under which the controllers were operating. The results showed (i) a highly significant correlation between the neurophysiological and the subjective workload assessments, and (ii) a high reliability over time (up to a month) of the proposed algorithm, which was also able to maintain high discrimination accuracies using a low number of EEG electrodes (~3 EEG channels). In conclusion, the proposed methodology demonstrated the suitability of p-BCI systems in operational environments and the advantages of neurophysiological measures with respect to subjective ones. © 2016 Elsevier B.V. All rights reserved.

  14. Applications of airborne ultrasound in human-computer interaction.

    PubMed

    Dahl, Tobias; Ealo, Joao L; Bang, Hans J; Holm, Sverre; Khuri-Yakub, Pierre

    2014-09-01

    Airborne ultrasound is a rapidly developing subfield within human-computer interaction (HCI). Touchless ultrasonic interfaces and pen tracking systems are part of recent trends in HCI and are gaining industry momentum. This paper aims to provide the background and overview necessary to understand the capabilities of ultrasound and its potential future in human-computer interaction. The latest developments on the ultrasound transducer side are presented, focusing on capacitive micro-machined ultrasonic transducers, or CMUTs. Their introduction is an important step toward providing real, low-cost multi-sensor array and beam-forming options. We also provide a unified mathematical framework for understanding and analyzing algorithms used for ultrasound detection and tracking for some of the most relevant applications. Copyright © 2014. Published by Elsevier B.V.

  15. A Closed-loop Brain Computer Interface to a Virtual Reality Avatar: Gait Adaptation to Visual Kinematic Perturbations

    PubMed Central

    Luu, Trieu Phat; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L.

    2016-01-01

    The control of human bipedal locomotion is of great interest to the field of lower-body brain computer interfaces (BCIs) for rehabilitation of gait. While the feasibility of a closed-loop BCI system for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a virtual reality (BCI-VR) environment has yet to be demonstrated. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control the walking movements of a virtual avatar. Moreover, virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. These findings have implications for the development of BCI-VR systems for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI system. PMID:27713915

  16. Final Report: MaRSPlus Sensor System Electrical Cable Management and Distributed Motor Control Computer Interface

    NASA Technical Reports Server (NTRS)

    Reil, Robin

    2011-01-01

    The success of JPL's Next Generation Imaging Spectrometer (NGIS) in Earth remote sensing has inspired a follow-on instrument project, the MaRSPlus Sensor System (MSS). One of JPL's responsibilities in the MSS project involves updating the documentation from the previous JPL airborne imagers to provide all the information necessary for an outside customer to operate the instrument independently. As part of this documentation update, I created detailed electrical cabling diagrams to provide JPL technicians with clear and concise build instructions and a database to track the status of cables from order to build to delivery. Simultaneously, a distributed motor control system is being developed for potential use on the proposed 2018 Mars rover mission. This system would significantly reduce the mass necessary for rover motor control, making more mass space available to other important spacecraft systems. The current stage of the project consists of a desktop computer talking to a single "cold box" unit containing the electronics to drive a motor. In order to test the electronics, I developed a graphical user interface (GUI) using MATLAB to allow a user to send simple commands to the cold box and display the responses received in a user-friendly format.

  17. A new perspective on how humans assess their surroundings; derivation of head orientation and its role in ‘framing’ the environment

    PubMed Central

    Wilson, Gwendoline Ixia; Holton, Mark D.; Walker, James; Jones, Mark W.; Grundy, Ed; Davies, Ian M.; Clarke, David; Luckman, Adrian; Russill, Nick; Wilson, Vianney; Plummer, Rosie

    2015-01-01

    Understanding the way humans inform themselves about their environment is pivotal in helping explain our susceptibility to stimuli and how this modulates behaviour and movement patterns. We present a new device, the Human Interfaced Personal Observation Platform (HIPOP), which is a head-mounted (typically on a hat) unit that logs magnetometry and accelerometry data at high rates and, following appropriate calibration, can be used to determine the heading and pitch of the wearer’s head. We used this device on participants visiting a botanical garden and noted that although head pitch ranged between −80° and 60°, 25% confidence limits were restricted to an arc of about 25° with a tendency for the head to be pitched down (mean head pitch ranged between −43° and 0°). Mean rates of change of head pitch varied between −0.00187°/0.1 s and 0.00187°/0.1 s, markedly slower than rates of change of head heading which varied between −0.3141°/0.1 s and 0.01263°/0.1 s although frequency distributions of both parameters showed them to be symmetrical and monomodal. Overall, there was considerable variation in both head pitch and head heading, which highlighted the role that head orientation might play in exposing people to certain features of the environment. Thus, when used in tandem with accurate position-determining systems, the HIPOP can be used to determine how the head is orientated relative to gravity and geographic North and in relation to geographic position, presenting data on how the environment is being ‘framed’ by people in relation to environmental content. PMID:26157643
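    Deriving head pitch and heading from hat-mounted accelerometry and magnetometry, as the HIPOP does after calibration, is conventionally done with tilt-compensated compass equations. The sketch below shows that standard computation under an assumed x-forward, y-right, z-down sensor convention; it is not the HIPOP calibration code:

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Head pitch and roll (rad) from a static accelerometer reading."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

def tilt_compensated_heading(mx, my, mz, pitch, roll):
    """Heading (deg from magnetic north), after rotating the magnetometer
    vector into the horizontal plane using pitch and roll."""
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-yh, xh)) % 360.0
```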

  18. Transferring brain-computer interfaces beyond the laboratory: successful application control for motor-disabled users.

    PubMed

    Leeb, Robert; Perdikis, Serafeim; Tonin, Luca; Biasiucci, Andrea; Tavella, Michele; Creatura, Marco; Molina, Alberto; Al-Khodairy, Abdul; Carlson, Tom; Millán, José D R

    2013-10-01

    Brain-computer interfaces (BCIs) are no longer only used by healthy participants under controlled conditions in laboratory environments, but also by patients and end-users, controlling applications in their homes or clinics, without the BCI experts around. But are the technology and the field mature enough for this? In particular, the successful operation of applications, like text entry systems or assistive mobility devices such as tele-presence robots, requires a good level of BCI control. How much training is needed to achieve such a level? Is it possible to train naïve end-users in 10 days to successfully control such applications? In this work, we report our experiences of training 24 motor-disabled participants at rehabilitation clinics or at the end-users' homes, without BCI experts present. We also share the lessons that we have learned through transferring BCI technologies from the lab to the user's home or clinic. The most important outcome is that 50% of the participants achieved good BCI performance and could successfully control the applications (tele-presence robot and text-entry system). In the case of the tele-presence robot the participants achieved an average performance ratio of 0.87 (max. 0.97) and for the text entry application a mean of 0.93 (max. 1.0). The lessons learned and the gathered user feedback range from pure BCI problems (technical and handling) to common communication issues among the different people involved and issues encountered while controlling the applications. The points raised in this paper are very widely applicable and we anticipate that they might be faced similarly by other groups, if they move on to bringing the BCI technology to the end-user, to home environments and towards application prototype control. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Man-machine interface for the control of a lunar transport machine

    NASA Technical Reports Server (NTRS)

    Ashley, Richard; Bacon, Loring; Carlton, Scott Tim; May, Mark; Moore, Jimmy; Peek, Dennis

    1987-01-01

    A proposed first-generation human interface control panel is described which will be used to control SKITTER, a three-legged lunar walking machine. Under development at Georgia Tech, SKITTER will be a multi-purpose, unmanned vehicle capable of preparing a site for the proposed lunar base in advance of the arrival of men. This walking machine will be able to accept modular special-purpose tools, such as a crane, a core sampling drill, and a digging device, among others. The project was concerned with the design of a human interface which could be used, from Earth, to control the movements of SKITTER on the lunar surface. Preliminary inquiries were also made into the modifications required to adapt the panel both to a shirt-sleeve lunar environment and to a mobile unit which could be used by a man in a space suit at a lunar work site.

  20. Training to use a commercial brain-computer interface as access technology: a case study.

    PubMed

    Taherian, Sarvnaz; Selitskiy, Dmitry; Pau, James; Davies, T Claire; Owens, R Glynn

    2016-01-01

    This case study describes how an individual with spastic quadriplegic cerebral palsy was trained over a period of four weeks to use a commercial electroencephalography (EEG)-based brain-computer interface (BCI). The participant spent three sessions exploring the system and seven sessions playing a game focused on EEG feedback training of left and right arm motor imagery; a customised training game paradigm was employed. The participant showed improvement in the production of two distinct EEG patterns. The participant's performance was influenced by motivation, fatigue and concentration. Six weeks post-training the participant could still control the BCI and used it to type a sentence using an augmentative and alternative communication application on a wirelessly linked device. The results from this case study highlight the importance of creating a dynamic, relevant and engaging training environment for BCIs. Implications for Rehabilitation: Customising a training paradigm to suit the user's interests can influence adherence to assistive technology training. Mood, fatigue, physical illness and motivation influence the usability of a brain-computer interface. Commercial brain-computer interfaces, which require little set-up time, may be used as access technology for individuals with severe disabilities.

  1. Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective

    PubMed Central

    Mattout, Jérémie

    2012-01-01

    A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291

  2. Information Presentation and Control in a Modern Air Traffic Control Tower Simulator

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Doubek, Sharon; Rabin, Boris; Harke, Stanton

    1996-01-01

    The proper presentation and management of information in America's largest and busiest (Level V) air traffic control towers calls for an in-depth understanding of many different human-computer considerations: user interface design for graphical, radar, and text; manual and automated data input hardware; information/display output technology; reconfigurable workstations; workload assessment; and many other related subjects. This paper discusses these subjects in the context of the Surface Development and Test Facility (SDTF) currently under construction at NASA's Ames Research Center, a full scale, multi-manned, air traffic control simulator which will provide the "look and feel" of an actual airport tower cab. Special emphasis will be given to the human-computer interfaces required for the different kinds of information displayed at the various controller and supervisory positions and to the computer-aided design (CAD) and other analytic, computer-based tools used to develop the facility.

  3. Role of optical computers in aeronautical control applications

    NASA Technical Reports Server (NTRS)

    Baumbick, R. J.

    1981-01-01

    The role that optical computers can play in aircraft control is examined. The optical computer has the potential to provide the high-speed capability required, especially for matrix/matrix operations, and the potential for handling nonlinear simulations in real time. Optical computers are also more compatible with fiber-optic signal transmission. Optics also permits the use of passive sensors to measure process variables, so that no electrical energy need be supplied to the sensor. Complex interfacing between optical sensors and the optical computer is avoided if the optical sensor outputs can be directly processed by the optical computer.

  4. Intelligent user interface concept for space station

    NASA Technical Reports Server (NTRS)

    Comer, Edward; Donaldson, Cameron; Bailey, Elizabeth; Gilroy, Kathleen

    1986-01-01

    The space station computing system must interface with a wide variety of users, from highly skilled operations personnel to payload specialists from all over the world. The interface must accommodate a wide variety of operations from the space platform, ground control centers and from remote sites. As a result, there is a need for a robust, highly configurable and portable user interface that can accommodate the various space station missions. The concept of an intelligent user interface executive, written in Ada, that would support a number of advanced human interaction techniques, such as windowing, icons, color graphics, animation, and natural language processing is presented. The user interface would provide intelligent interaction by understanding the various user roles, the operations and mission, the current state of the environment and the current working context of the users. In addition, the intelligent user interface executive must be supported by a set of tools that would allow the executive to be easily configured and to allow rapid prototyping of proposed user dialogs. This capability would allow human engineering specialists acting in the role of dialog authors to define and validate various user scenarios. The set of tools required to support development of this intelligent human interface capability is discussed and the prototyping and validation efforts required for development of the Space Station's user interface are outlined.

  5. Tape/head interface study

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Existing high-energy tapes, high-track-density heads, and transport guidance techniques were evaluated and characterized to enable these technologies to be employed in future spacecraft recorders with high confidence. The results of these study efforts demonstrated tracking accuracy and tape and head densities that will support spacecraft recorders with data rates of at least 150 Mbps and storage capacities ranging from 10^10 to 10^11 bits. Seven high-energy tapes of either 0.25 in width, 1.00 in width, or both, were tested. All tapes were tested at the same speed (30 ips) and the same packing density (33 KBI). The performance of all 1 in tapes was considered superior.

  6. Digital interface for bi-directional communication between a computer and a peripheral device

    NASA Technical Reports Server (NTRS)

    Bond, H. H., Jr. (Inventor); Franklin, C. R.

    1984-01-01

    For transmission of data from the computer to the peripheral, the computer initially clears a flipflop which provides a select signal to a multiplexer. A data available signal or data strobe signal is produced while the data is being provided to the interface. Setting of the flipflop causes a gate to provide to the peripheral a signal indicating that the interface has data available for transmission. The peripheral provides an acknowledge or strobe signal to transfer the data to the peripheral. For transmission of data from the peripheral to the computer, the computer presets the initially cleared flipflop. A data request signal from the peripheral indicates that the peripheral has data available for transmission to the computer. An acknowledge signal indicates that the interface is ready to receive data from the peripheral and to strobe that data into the interface.

  7. Acoustic pressure waves induced in human heads by RF pulses from high-field MRI scanners.

    PubMed

    Lin, James C; Wang, Zhangwei

    2010-04-01

    The current evolution toward greater image resolution from magnetic resonance imaging (MRI) scanners has prompted the exploration of higher-strength magnetic fields and the use of higher levels of radio frequencies (RFs). Auditory perception of RF pulses by humans has been reported during MRI with head coils. It has been shown that the mechanism of interaction for the auditory effect is an RF pulse-induced thermoelastic pressure wave inside the head. We report a computational study of the intensity and frequency of thermoelastic pressure waves generated by RF pulses in the human head inside high-field MRI and clinical scanners. The U.S. Food and Drug Administration (U.S. FDA) guidelines limit the local specific absorption rate (SAR) in the body, including the head, to 8 W/kg. We present results as functions of SAR and show that for a given SAR the peak acoustic pressures generated in the anatomic head model were essentially the same at 64, 300, and 400 MHz (1.5, 7.0, and 9.4 T). Pressures generated in the anatomic head are comparable to the threshold pressure of 20 mPa for sound perception by humans at the cochlea for 4 W/kg. Moreover, the results indicate that the peak acoustic pressure in the brain is only 2 to 3 times the auditory threshold at the U.S. FDA guideline of 8 W/kg. Even at a high SAR of 20 W/kg, where the acoustic pressure in the brain could be more than 7 times the auditory threshold, the sound pressure levels would not be more than 17 dB above the threshold of perception at the cochlea.
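    The closing figure can be sanity-checked with the standard sound-pressure-level relation 20·log10(p/p_ref), taking the 20 mPa cochlear threshold as the reference pressure: a pressure 7 times the threshold lies about 17 dB above it. A small check, with illustrative values:

```python
import math

def db_above_threshold(pressure_pa, threshold_pa=20e-3):
    """Sound pressure level relative to the 20 mPa cochlear threshold."""
    return 20.0 * math.log10(pressure_pa / threshold_pa)

print(f"{db_above_threshold(7 * 20e-3):.1f} dB")  # 7x threshold -> ~16.9 dB
```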

  8. Variability in the control of head movements in seated humans: a link with whiplash injuries?

    PubMed Central

    Vibert, N; MacDougall, H G; de Waele, C; Gilchrist, D P D; Burgess, A M; Sidis, A; Migliaccio, A; Curthoys, I S; Vidal, P P

    2001-01-01

    The aim of this study was to determine how context and on-line sensory information are combined to control posture in seated subjects submitted to high-jerk, passive linear accelerations. Subjects were seated with eyes closed on a servo-controlled linear sled. They were asked to relax and received brief accelerations either sideways or in the fore-aft direction. The stimuli had an abrupt onset, comparable to the jerk experienced during a minor car collision. Rotation and translation of the head and body were measured using an Optotrak system. In some of the subjects, surface electromyographic (EMG) responses of selected neck and/or back muscles were recorded simultaneously. For each subject, responses were highly stereotyped from the first trial, and showed little sign of habituation or sensitisation. Comparable results were obtained with sideways and fore-aft accelerations. During each impulse, the head lagged behind the trunk for several tens of milliseconds. The subjects' head movement responses were distributed as a continuum in between two extreme categories. The ‘stiff’ subjects showed little rotation or translation of the head relative to the trunk for the whole duration of the impulse. In contrast, the ‘floppy’ subjects showed a large roll or pitch of the head relative to the trunk in the direction opposite to the sled movement. This response appeared as an exaggerated ‘inertial’ response to the impulse. Surface EMG recordings showed that most of the stiff subjects were not contracting their superficial neck or back muscles. We think they relied on bilateral contractions of their deep, axial musculature to keep the head-neck ensemble in line with the trunk during the movement. About half of the floppy subjects displayed reflex activation of the neck muscles on the side opposite to the direction of acceleration, which occurred before or during the head movement and tended to exaggerate it. The other floppy subjects seemed to rely on only the

  9. Supporting Flight Control for UAV-Assisted Wilderness Search and Rescue Through Human Centered Interface Design

    DTIC Science & Technology

    2007-12-01

    have found that increased realism typically makes a more impressive-looking interface, but not always a more effective interface [53]. Some display...intended positions. Other, more cinematic methods may look more impressive, but looking better is not necessarily more effective at supporting... realism: Misplaced faith in realistic displays. Ergonomics in Design: Magazine of Human Factors Applications, 13(3):6–13, 2005. [54] H. S. Smallman, M. St

  10. Upper Body-Based Power Wheelchair Control Interface for Individuals with Tetraplegia

    PubMed Central

    Thorp, Elias B.; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica; Pierella, Camilla; Roth, Elliot J.; Gonzalez, Ismael Seanez; Mussa-Ivaldi, Ferdinando A.

    2016-01-01

    Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users a limited vocabulary of commands and often do not take advantage of the user's residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high-dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control the power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five sessions driving, were able to achieve smoothness that was similar to the smoothness with their current joystick. All participants were twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize training controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control. PMID:26054071

  11. Upper Body-Based Power Wheelchair Control Interface for Individuals With Tetraplegia.

    PubMed

    Thorp, Elias B; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica P; Pierella, Camilla; Roth, Elliot J; Seanez Gonzalez, Ismael; Mussa-Ivaldi, Ferdinando A

    2016-02-01

    Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users a limited vocabulary of commands and often do not take advantage of the user's residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high-dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control a power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five sessions driving, were able to achieve smoothness that was similar to the smoothness with their current joystick. All participants were twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize training controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control.
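    A body-machine interface of this kind needs a mapping from many sensor dimensions down to a wheelchair's two proportional drive commands. One common realization, sketched below, projects the shoulder signals onto their two leading principal components; this projection and the command names are illustrative assumptions, not necessarily the calibration used in the study:

```python
import numpy as np
from sklearn.decomposition import PCA

def calibrate(calibration_signals):
    """Fit a 2-D projection to (n_samples, n_sensors) free-movement data."""
    return PCA(n_components=2).fit(calibration_signals)

def to_command(pca, sensor_frame, gain=1.0):
    """Map one sensor frame to proportional (forward, turn) drive commands."""
    forward, turn = gain * pca.transform(sensor_frame.reshape(1, -1))[0]
    return forward, turn
```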

  12. The Selection of the Appropriate Computer Interface Device for Patients With High Cervical Cord Injury

    PubMed Central

    Kim, Dong-Goo; Lim, Sung Eun; Kim, Dong-A; Hwang, Sung Il; Yim, You-lim; Park, Jeong Mi

    2013-01-01

    In order to determine the most suitable computer interfaces for patients with high cervical cord injury, we report three cases of applications of special input devices. The first was a 49-year-old patient with neurological level of injury (NLI) C4, American Spinal Injury Association Impairment Scale (ASIA)-A. He could move the cursor by using a webcam-based Camera Mouse, while clicking could only be performed by pronation of the forearm on the modified Micro Light Switch. The second case was a 41-year-old patient with NLI C3, ASIA-A. The SmartNav 4AT, which responds to head movements, could provide stable performance in clicking and dragging. The third was a 13-year-old patient with NLI C1, ASIA-B, who used the IntegraMouse, which enables clicking and dragging with fine movements of the lips. Selecting the appropriate interface device for patients with high cervical cord injury should be considered an important part of rehabilitation. We expect the standard proposed in this study will be helpful. PMID:23869346

  13. Heading-vector navigation based on head-direction cells and path integration.

    PubMed

    Kubie, John L; Fenton, André A

    2009-05-01

    Insect navigation is guided by heading vectors that are computed by path integration. Mammalian navigation models, on the other hand, are typically based on map-like place representations provided by hippocampal place cells. Such models compute optimal routes as a continuous series of locations that connect the current location to a goal. We propose a "heading-vector" model in which head-direction cells or their derivatives serve both as key elements in constructing the optimal route and as the straight-line guidance during route execution. The model is based on a memory structure termed the "shortcut matrix," which is constructed during the initial exploration of an environment when a set of shortcut vectors between sequential pairs of visited waypoint locations is stored. A mechanism is proposed for calculating and storing these vectors that relies on a hypothesized cell type termed an "accumulating head-direction cell." Following exploration, shortcut vectors connecting all pairs of waypoint locations are computed by vector arithmetic and stored in the shortcut matrix. On re-entry, when local view or place representations query the shortcut matrix with a current waypoint and goal, a shortcut trajectory is retrieved. Since the trajectory direction is in head-direction compass coordinates, navigation is accomplished by tracking the firing of head-direction cells that are tuned to the heading angle. Section 1 of the manuscript describes the properties of accumulating head-direction cells. It then shows how accumulating head-direction cells can store local vectors and use vector arithmetic to perform path-integration-based homing. Section 2 describes the construction and use of the shortcut matrix for computing direct paths between any pair of locations that have been registered in the shortcut matrix. In the discussion, we analyze the advantages of heading-based navigation over map-based navigation. Finally, we survey behavioral evidence that nonhippocampal
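    The shortcut-matrix idea, storing path-integrated vectors between sequential waypoints and deriving the vector between any pair by vector arithmetic, reduces to a few lines of vector addition. The sketch below is an illustrative 2-D reduction with a compass-style heading readout, not the authors' model code:

```python
import math
import numpy as np

def build_shortcut_matrix(sequential_vectors):
    """sequential_vectors[i] is the stored 2-D vector from waypoint i to i+1;
    returns shortcuts[i, j] = vector from waypoint i to waypoint j."""
    positions = np.vstack([np.zeros(2),
                           np.cumsum(np.asarray(sequential_vectors), axis=0)])
    return positions[None, :, :] - positions[:, None, :]

def heading_to_goal(shortcuts, current, goal):
    """Compass-style bearing (deg, 0 = +y axis) of the direct path."""
    dx, dy = shortcuts[current, goal]
    return math.degrees(math.atan2(dx, dy)) % 360.0

shortcuts = build_shortcut_matrix([(2.0, 0.0), (0.0, 3.0), (-1.0, 1.0)])
print(heading_to_goal(shortcuts, 3, 0))  # direct heading home from waypoint 3
```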

  14. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven to be desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  15. Pilot-Vehicle Interface

    DTIC Science & Technology

    1993-11-01

    way is to develop a crude but working model of an entire system. The other is by developing a realistic model of the user interface, leaving out most...devices or by incorporating software for a more user-friendly interface. Automation introduces the possibility of making data entry errors. Multimode...across various human-computer interfaces. Memory: Minimize the amount of information that the user must maintain in short-term memory

  16. Biomechanical Modeling of the Human Head

    DTIC Science & Technology

    2017-10-03

    between model predictions and experimental data. This report details model calibration for all materials identified in models of a human head and... Stress-strain data for the pia mater and dura mater (human subject); experimental data originally presented in [28...treated as one material) based on a hyperelastic model and experimental data from [59]... Comparison of

  17. Collaborative Approach in the Development of High‐Performance Brain–Computer Interfaces for a Neuroprosthetic Arm: Translation from Animal Models to Human Control

    PubMed Central

    Collinger, Jennifer L.; Kryger, Michael A.; Barbara, Richard; Betler, Timothy; Bowsher, Kristen; Brown, Elke H. P.; Clanton, Samuel T.; Degenhart, Alan D.; Foldes, Stephen T.; Gaunt, Robert A.; Gyulai, Ferenc E.; Harchick, Elizabeth A.; Harrington, Deborah; Helder, John B.; Hemmes, Timothy; Johannes, Matthew S.; Katyal, Kapil D.; Ling, Geoffrey S. F.; McMorland, Angus J. C.; Palko, Karina; Para, Matthew P.; Scheuermann, Janet; Schwartz, Andrew B.; Skidmore, Elizabeth R.; Solzbacher, Florian; Srikameswaran, Anita V.; Swanson, Dennis P.; Swetz, Scott; Tyler‐Kabara, Elizabeth C.; Velliste, Meel; Wang, Wei; Weber, Douglas J.; Wodlinger, Brian

    2013-01-01

    Our research group recently demonstrated that a person with tetraplegia could use a brain-computer interface (BCI) to control a sophisticated anthropomorphic robotic arm with skill and speed approaching that of an able-bodied person. This multiyear study exemplifies important principles in translating research from foundational theory and animal experiments into a clinical study. We present a roadmap that may serve as an example for other areas of clinical device research as well as an update on study results. Prior to conducting a multiyear clinical trial, years of animal research preceded BCI testing in an epilepsy monitoring unit, and then in a short-term (28 days) clinical investigation. Scientists and engineers developed the necessary robotic and surgical hardware, software environment, data analysis techniques, and training paradigms. Coordination among researchers, funding institutes, and regulatory bodies ensured that the study would provide valuable scientific information in a safe environment for the study participant. Finally, clinicians from neurosurgery, anesthesiology, physiatry, psychology, and occupational therapy all worked in a multidisciplinary team along with the other researchers to conduct a multiyear BCI clinical study. This teamwork and coordination can be used as a model for others attempting to translate basic science into real-world clinical situations. PMID:24528900

  18. Design and Implementation of an Interface Editor for the Amadeus Multi-Relational Database Front-end System

    DTIC Science & Technology

    1993-03-25

    application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic has been incorporated... Knowledge gained from each is applied to the design of a Form-based interface for database data
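
    As a hedged sketch of the OOP-plus-HCI idea the excerpt describes, the Python fragment below declares a form's fields once and derives both the user-facing field names and the parameterized SQL from that single declaration, so the interface and the relation cannot drift apart. All class, table, and field names are hypothetical; the Amadeus system's actual design is not reproduced here.

        # Hypothetical form-based database front-end: one dataclass declares the
        # form, and the insert statement is generated from its fields, so UI
        # labels and columns stay consistent (a single source of truth).
        import sqlite3
        from dataclasses import dataclass, fields

        @dataclass
        class PersonForm:
            name: str
            email: str

        def insert_form(conn, table, form):
            cols = [f.name for f in fields(form)]
            marks = ", ".join("?" for _ in cols)
            sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({marks})"
            conn.execute(sql, tuple(getattr(form, c) for c in cols))

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE person (name TEXT, email TEXT)")
        insert_form(conn, "person", PersonForm(name="Ada", email="ada@example.org"))
        print(conn.execute("SELECT * FROM person").fetchall())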

  19. Dynamic response due to behind helmet blunt trauma measured with a human head surrogate.

    PubMed

    Freitas, Christopher J; Mathis, James T; Scott, Nikki; Bigger, Rory P; Mackiewicz, James

    2014-01-01

    A Human Head Surrogate has been developed for use in behind helmet blunt trauma experiments. This human head surrogate fills the void between Post-Mortem Human Subject testing (with biofidelity but handling restrictions) and commercial ballistic head forms (with no biofidelity but ease of use). This unique human head surrogate is based on refreshed human craniums and surrogate materials representing human head soft tissues such as the skin, dura, and brain. A methodology for refreshing the craniums is developed and verified through material testing. A test methodology utilizing these unique human head surrogates is also developed and then demonstrated in a series of experiments in which non-perforating ballistic impact of combat helmets is performed with and without supplemental ceramic appliques for protecting against larger caliber threats. Sensors embedded in the human head surrogates allow for direct measurement of intracranial pressure, cranial strain, and head and helmet acceleration. Over seventy (70) fully instrumented experiments have been executed using this unique surrogate. Examples of the data collected are presented. Based on this series of tests, the Southwest Research Institute (SwRI) Human Head Surrogate has demonstrated great potential for providing insights into injury mechanics resulting from non-perforating ballistic impact on combat helmets, and it directly supports behind helmet blunt trauma studies.
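
    A typical first-pass reduction of such embedded-sensor traces is to extract peak values from the pressure and acceleration channels. The Python sketch below does this on synthetic stand-in signals; the sampling rate, channel layout, and waveforms are assumptions for illustration, not SwRI data.

        # Hedged sketch: peak intracranial pressure and peak resultant head
        # acceleration from sampled traces. All signals are synthetic stand-ins.
        import numpy as np

        fs = 100_000                      # assumed sampling rate, Hz
        t = np.arange(0, 0.005, 1 / fs)   # 5 ms window around impact

        icp_kpa = 120 * np.exp(-t / 0.001) * np.sin(2 * np.pi * 1500 * t)
        accel_g = np.vstack([a * np.exp(-t / 0.0015) for a in (180.0, 60.0, 30.0)])

        peak_icp = np.max(np.abs(icp_kpa))                    # kPa
        peak_accel = np.linalg.norm(accel_g, axis=0).max()    # g
        print(f"peak ICP = {peak_icp:.0f} kPa, peak accel = {peak_accel:.0f} g")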
