Sample records for human-machine interface

  1. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA.

    PubMed

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    To account for the psychological and cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (a hybrid genetic algorithm and ant colony algorithm) were introduced into the layout design of the human-machine interaction interface. First, from the perspective of cognitive psychology and following the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of the human-machine interaction interface were summarized as the constraints in layout design. Next, the expression forms of the fitness function, pheromone, and heuristic information for the layout optimization of the cabin were studied, and the layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization results showed the feasibility and effectiveness of the proposed method.
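    To make the optimization step concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the genetic half of such a layout search: candidate layouts are permutations assigning controls to panel slots, and the fitness function penalizes placing frequently used controls in hard-to-reach slots. The control names, usage frequencies, and slot costs are invented for illustration, and the ant-colony pheromone update of the full GA-ACA hybrid is omitted.

```python
import random

# Hypothetical data: usage frequency of each control and the "comfort cost"
# of each candidate slot (distance from the operator's optimal reach zone).
CONTROLS = {"throttle": 0.9, "brake": 0.8, "alarm_ack": 0.5, "lights": 0.2, "radio": 0.1}
SLOT_COST = [0.1, 0.2, 0.4, 0.7, 1.0]  # slot 0 is easiest to reach

def fitness(layout):
    """Lower is better: frequently used controls should sit in low-cost slots."""
    return sum(CONTROLS[c] * SLOT_COST[s] for s, c in enumerate(layout))

def mutate(layout):
    """Swap two controls (a standard permutation mutation)."""
    a, b = random.sample(range(len(layout)), 2)
    child = list(layout)
    child[a], child[b] = child[b], child[a]
    return child

def genetic_layout_search(generations=200, pop_size=20):
    names = list(CONTROLS)
    population = [random.sample(names, len(names)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 2]          # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

if __name__ == "__main__":
    best = genetic_layout_search()
    print("best layout:", best, "fitness:", round(fitness(best), 3))
```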

  2. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA

    PubMed Central

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    To account for the psychological and cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (a hybrid genetic algorithm and ant colony algorithm) were introduced into the layout design of the human-machine interaction interface. First, from the perspective of cognitive psychology and following the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of the human-machine interaction interface were summarized as the constraints in layout design. Next, the expression forms of the fitness function, pheromone, and heuristic information for the layout optimization of the cabin were studied, and the layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization results showed the feasibility and effectiveness of the proposed method. PMID:26884745

  3. Future Cyborgs: Human-Machine Interface for Virtual Reality Applications

    DTIC Science & Technology

    2007-04-01

    Future Cyborgs: Human-Machine Interface for Virtual Reality Applications. Robert R. Powell, Major, USAF, April 2007, Blue Horizons... (excerpt of report documentation fields and endnotes; cited works include Nicholas Negroponte, Being Digital (New York: Alfred A. Knopf, Inc., 1995) and Andy Clark, Natural-Born Cyborgs (New York: Oxford...))

  4. Man-systems integration and the man-machine interface

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1990-01-01

    Viewgraphs on man-systems integration and the man-machine interface are presented. Man-systems integration applies the systems' approach to the integration of the user and the machine to form an effective, symbiotic Man-Machine System (MMS). A MMS is a combination of one or more human beings and one or more physical components that are integrated through the common purpose of achieving some objective. The human operator interacts with the system through the Man-Machine Interface (MMI).

  5. Gloved Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Adams, Richard (Inventor); Hannaford, Blake (Inventor); Olowin, Aaron (Inventor)

    2015-01-01

    Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.

  6. Learning Machine, Vietnamese Based Human-Computer Interface.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    The sixth session of IT@EDU98 consisted of seven papers on the topic of the learning machine--Vietnamese based human-computer interface, and was chaired by Phan Viet Hoang (Informatics College, Singapore). "Knowledge Based Approach for English Vietnamese Machine Translation" (Hoang Kiem, Dinh Dien) presents the knowledge base approach,…

  7. Multiple man-machine interfaces

    NASA Technical Reports Server (NTRS)

    Stanton, L.; Cook, C. W.

    1981-01-01

    The multiple man machine interfaces inherent in military pilot training, their social implications, and the issue of possible negative feedback were explored. Modern technology has produced machines which can see, hear, and touch with greater accuracy and precision than human beings. Consequently, the military pilot is more a systems manager, often doing battle against a target he never sees. It is concluded that unquantifiable human activity requires motivation that is not intrinsic in a machine.

  8. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
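    The core signal-processing step described here, filtering a monaural cue with head-related transfer functions to produce a binaural signal, can be sketched as a per-ear convolution. The snippet below is an illustrative approximation rather than the authors' system: the head-related impulse responses are crude placeholders (a simple interaural time and level difference) standing in for measured HRTFs.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44_100                                 # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
mono = 0.5 * np.sin(2 * np.pi * 440 * t)    # a plain mono cue (440 Hz tone)

# Placeholder HRIRs: a measured head-related transfer function would be used in
# practice; here the right ear simply gets a small interaural time delay and
# level difference, which is enough to show the filtering step.
itd_samples = int(0.0006 * fs)              # ~0.6 ms interaural time delay
hrir_left = np.zeros(256);  hrir_left[0] = 1.0
hrir_right = np.zeros(256); hrir_right[itd_samples] = 0.6

# Binaural rendering = per-ear convolution of the mono cue with each HRIR.
left = fftconvolve(mono, hrir_left)[: len(mono)]
right = fftconvolve(mono, hrir_right)[: len(mono)]
binaural = np.stack([left, right], axis=1)  # shape (n_samples, 2) for playback
print(binaural.shape)
```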

  9. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  10. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    PubMed

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking mapping loads, or even disasters, in product operation. It is important to help people avoid human-machine interaction confusions and difficulties in today's mentally demanding work environments. The objective is to improve the usability of a product and to minimize the user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking loads is uniquely determined first. Then, creative alternatives, organized by the way humans obtain operational information, are provided as digital interface state datasets. Finally, using the cluster analysis method, an optimum solution is picked out from the alternatives by calculating the distances between the datasets. Considering multiple factors to minimize users' thinking mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to the mental load minimization problem in human-machine interaction design.
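    The final selection step, picking the alternative whose interface-state dataset lies closest to the ideal design, can be illustrated with a small distance computation. The feature names, values, and the use of plain Euclidean distance below are assumptions for illustration; the paper's cluster analysis is richer than this sketch.

```python
import numpy as np

# Hypothetical feature vectors (e.g., mapping directness, icon familiarity,
# response compatibility, feedback immediacy), each scaled to [0, 1].
ideal = np.array([1.0, 1.0, 1.0, 1.0])        # the "minimum thinking load" reference
alternatives = {
    "design_A": np.array([0.9, 0.7, 0.8, 0.6]),
    "design_B": np.array([0.6, 0.9, 0.9, 0.9]),
    "design_C": np.array([0.8, 0.8, 0.5, 0.7]),
}

# Pick the alternative whose dataset lies closest to the ideal design.
distances = {name: float(np.linalg.norm(vec - ideal)) for name, vec in alternatives.items()}
best = min(distances, key=distances.get)
print(distances, "->", best)
```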

  11. Knowledge-based load leveling and task allocation in human-machine systems

    NASA Technical Reports Server (NTRS)

    Chignell, M. H.; Hancock, P. A.

    1986-01-01

    Conventional human-machine systems use task allocation policies which are based on the premise of a flexible human operator. This individual is most often required to compensate for and augment the capabilities of the machine. The development of artificial intelligence and improved technologies have allowed for a wider range of task allocation strategies. In response to these issues a Knowledge Based Adaptive Mechanism (KBAM) is proposed for assigning tasks to human and machine in real time, using a load leveling policy. This mechanism employs an online workload assessment and compensation system which is responsive to variations in load through an intelligent interface. This interface consists of a loading strategy reasoner which has access to information about the current status of the human-machine system as well as a database of admissible human/machine loading strategies. Difficulties standing in the way of successful implementation of the load leveling strategy are examined.
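    As a rough illustration of load-levelling task allocation (not the KBAM itself), the sketch below assigns each incoming task to whichever admissible agent, human or machine, currently carries the lighter load. The task types, task loads, and admissibility table are hypothetical.

```python
# Hypothetical load-levelling rule: each new task goes to whichever agent
# (human or machine) it is admissible for and whose current workload is lower;
# the real KBAM consults a much richer knowledge base of loading strategies.
ADMISSIBLE = {            # which agent may perform each task type
    "monitor": {"human", "machine"},
    "diagnose": {"human"},
    "log": {"machine"},
}
TASK_LOAD = {"monitor": 0.2, "diagnose": 0.5, "log": 0.1}
workload = {"human": 0.0, "machine": 0.0}

def allocate(task_type):
    candidates = ADMISSIBLE[task_type]
    agent = min(candidates, key=lambda a: workload[a])   # level the load
    workload[agent] += TASK_LOAD[task_type]
    return agent

for task in ["monitor", "log", "diagnose", "monitor", "monitor"]:
    print(task, "->", allocate(task), workload)
```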

  12. A Function-Behavior-State Approach to Designing Human Machine Interface for Nuclear Power Plant Operators

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Zhang, W. J.

    2005-02-01

    This paper presents an approach to human-machine interface design for control room operators of nuclear power plants. The first step in designing an interface for a particular application is to determine the information content that needs to be displayed. The design methodology for this step is called the interface design framework (hereafter, the framework). Several frameworks have been proposed for applications at varying levels, including process plants. However, none is based on the design and manufacture of the plant system for which the interface is designed. This paper presents an interface design framework which originates from design theory and methodology for general technical systems. Specifically, the framework is based on a set of core concepts of a function-behavior-state model originally proposed by the artificial intelligence research community and widely applied in the design research community. Benefits of this new framework include the provision of a model-based fault diagnosis facility, and the seamless integration of the design (manufacture, maintenance) of plants and the design of human-machine interfaces. The missing linkage between design and operation of a plant was one of the causes of the Three Mile Island nuclear reactor incident. A simulated plant system is presented to explain how to apply this framework in designing an interface. The resulting human-machine interface is discussed; specifically, several fault diagnosis examples are elaborated to demonstrate how this interface could support operators' fault diagnosis in an unanticipated situation.

  13. Human machine interface display design document.

    DOT National Transportation Integrated Search

    2008-01-01

    The purpose of this document is to describe the design for the human machine interface (HMI) display for the Next Generation 9-1-1 (NG9-1-1) System (or system of systems) based on the initial Tier 1 requirements identified for the NG9-1-1 S...

  14. EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton.

    PubMed

    Yin, Yue H; Fan, Yuan J; Xu, Li D

    2012-07-01

    Although a lower extremity exoskeleton shows great prospect in the rehabilitation of the lower limb, it has not yet been widely applied to the clinical rehabilitation of the paralyzed. This is partly caused by insufficient information interactions between the paralyzed and existing exoskeleton that cannot meet the requirements of harmonious control. In this research, a bidirectional human-machine interface including a neurofuzzy controller and an extended physiological proprioception (EPP) feedback system is developed by imitating the biological closed-loop control system of human body. The neurofuzzy controller is built to decode human motion in advance by the fusion of the fuzzy electromyographic signals reflecting human motion intention and the precise proprioception providing joint angular feedback information. It transmits control information from human to exoskeleton, while the EPP feedback system based on haptic stimuli transmits motion information of the exoskeleton back to the human. Joint angle and torque information are transmitted in the form of air pressure to the human body. The real-time bidirectional human-machine interface can help a patient with lower limb paralysis to control the exoskeleton with his/her healthy side and simultaneously perceive motion on the paralyzed side by EPP. The interface rebuilds a closed-loop motion control system for paralyzed patients and realizes harmonious control of the human-machine system.

  15. Design Control Systems of Human Machine Interface in the NTVS-2894 Seat Grinder Machine to Increase the Productivity

    NASA Astrophysics Data System (ADS)

    Ardi, S.; Ardyansyah, D.

    2018-02-01

    In the manufacturing of automotive spare parts, increased vehicle sales have resulted in increased customer demand for engine valve production. To meet this demand, we carried out an improvement and overhaul of the NTVS-2894 seat grinder machine on a machining line. The NTVS-2894 seat grinder machine had suffered from decreased productivity and an increasing amount of trouble and downtime. Because the machine had no backup HMI (Human Machine Interface) program prior to the overhaul, the overhaul covered both the mechanics and the programs, including the design and manufacture of an HMI GP-4501T program. The goals of this design and manufacture are to improve production output and to allow an operator to operate the machine more easily and to troubleshoot the NTVS-2894 seat grinder machine, thereby reducing its downtime. After the redesign, the HMI program was successfully rebuilt, machine productivity increased by 34.8%, the amount of trouble decreased by 40%, and downtime decreased from 3,160 minutes to 1,700 minutes. The implication of our design is that it makes it easier for the operator to run the machine and for the technician to maintain it and troubleshoot machine problems.

  16. All printed touchless human-machine interface based on only five functional materials

    NASA Astrophysics Data System (ADS)

    Scheipl, G.; Zirkl, M.; Sawatdee, A.; Helbig, U.; Krause, M.; Kraker, E.; Andersson Ersman, P.; Nilsson, D.; Platt, D.; Bodö, P.; Bauer, S.; Domann, G.; Mogessie, A.; Hartmann, Paul; Stadlober, B.

    2012-02-01

    We demonstrate the printing of a complex smart integrated system using only five functional inks: the fluoropolymer P(VDF:TrFE) (poly(vinylidene fluoride-trifluoroethylene)) sensor ink, the conductive polymer PEDOT:PSS (poly(3,4-ethylenedioxythiophene):poly(styrene sulfonic acid)) ink, a conductive carbon paste, a polymeric electrolyte, and SU8 for separation. The result is a touchless human-machine interface, including piezo- and pyroelectric sensor pixels (sensitive to pressure changes and impinging infrared light), transistors for impedance matching and signal conditioning, and an electrochromic display. Applications may emerge not only in human-machine interfaces but also in transient temperature or pressure sensing for safety technology, in artificial skins, and in disposable sensor labels.

  17. Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.

    ERIC Educational Resources Information Center

    Acker, Stephen R.

    1986-01-01

    This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)

  18. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians can recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.

  19. MARTI: man-machine animation real-time interface

    NASA Astrophysics Data System (ADS)

    Jones, Christian M.; Dlay, Satnam S.

    1997-05-01

    The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.

  20. Literate Specification: Using Design Rationale To Support Formal Methods in the Development of Human-Machine Interfaces.

    ERIC Educational Resources Information Center

    Johnson, Christopher W.

    1996-01-01

    The development of safety-critical systems (aircraft cockpits and reactor control rooms) is qualitatively different from that of other interactive systems. These differences impose burdens on design teams that must ensure the development of human-machine interfaces. Analyzes strengths and weaknesses of formal methods for the design of user…

  1. Adapting human-machine interfaces to user performance.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2008-01-01

    The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor at the endpoint of the simulated arm.
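    A minimal sketch of the kind of mapping involved, under assumed dimensions and synthetic data, is shown below: a linear decoder fit by least squares maps a high-dimensional glove signal to the two joint angles of a planar 2-link arm, whose forward kinematics gives the cursor position. The glove dimensionality and link lengths are invented, and the adaptive, cadenced update of the decoder described in the study is not reproduced here.

```python
import numpy as np

L1, L2 = 0.3, 0.25                     # link lengths of the simulated planar arm (m)

def forward_kinematics(q):
    """Endpoint (cursor) position of a planar 2-link arm for joint angles q = (q1, q2)."""
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

# Hypothetical calibration data: 19-dimensional glove samples paired with the
# joint angles the subject intended. A linear decoder W is fit by least squares,
# standing in for the study's adaptive algorithm.
rng = np.random.default_rng(0)
glove_samples = rng.normal(size=(50, 19))
intended_angles = rng.uniform(-np.pi / 2, np.pi / 2, size=(50, 2))
W, *_ = np.linalg.lstsq(glove_samples, intended_angles, rcond=None)

new_glove_frame = rng.normal(size=19)
q = new_glove_frame @ W                # decode finger motion -> joint angles
print("cursor at", forward_kinematics(q))
```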

  2. Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano; Vaz, Luís

    2010-01-01

    This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain Computer Interfaces (BCI), characterized by providing a small set of commands issued sparsely, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) that is used to train users' ability to steer a powered wheelchair appropriately, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to combine the information provided by the local motion planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.

  3. Diverse applications of advanced man-telerobot interfaces

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas A.

    1991-01-01

    Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential application wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.

  4. Soft, Conformal Bioelectronics for a Wireless Human-Wheelchair Interface

    PubMed Central

    Mishra, Saswat; Norton, James J. S.; Lee, Yongkuk; Lee, Dong Sup; Agee, Nicolas; Chen, Yanfei; Chun, Youngjae; Yeo, Woon-Hong

    2017-01-01

    There are more than 3 million people in the world whose mobility relies on wheelchairs. Recent advancements in engineering technology enable more intuitive, easy-to-use rehabilitation systems. A human-machine interface that uses non-invasive, electrophysiological signals can allow a systematic interaction between human and devices; for example, eye-movement-based wheelchair control. However, the existing machine-interface platforms are obtrusive, uncomfortable, and often cause skin irritations as they require a metal electrode affixed to the skin with a gel and acrylic pad. Here, we introduce a bioelectronic system that makes dry, conformal contact with the skin. The mechanically comfortable sensor records high-fidelity electrooculograms, comparable to the conventional gel electrode. Quantitative signal analysis and infrared thermographs show the advantages of the soft biosensor for an ergonomic human-machine interface. A classification algorithm with an optimized set of features shows an accuracy of 94% with five eye movements. A Bluetooth-enabled system incorporating the soft bioelectronics demonstrates precise, hands-free control of a robotic wheelchair via electrooculograms. PMID:28152485

  5. Human Machine Interfaces for Teleoperators and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  6. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy >90% for the chosen combinations proved their ability to be used as command controllers.
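    The feature-extraction and grouping pipeline described above (band-pass filtering, RMS features, fuzzy c-means) can be sketched compactly. The snippet below uses synthetic EMG, a single RMS feature per trial, and a small hand-rolled fuzzy c-means routine; the filter order, cut-offs, and number of trials are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rms_feature(emg, fs=1000.0, low=20.0, high=450.0):
    """Band-pass the raw EMG and return its root-mean-square value."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg)
    return float(np.sqrt(np.mean(filtered ** 2)))

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Tiny fuzzy c-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        U = 1.0 / (dist ** (2 / (m - 1)))      # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Hypothetical recordings: 3 gestures x 10 trials of synthetic single-channel EMG.
rng = np.random.default_rng(1)
features = np.array([[rms_feature(amp * rng.normal(size=2000))]
                     for amp in (0.2, 0.6, 1.2) for _ in range(10)])
centres, memberships = fuzzy_c_means(features, c=3)
print("cluster centres (RMS):", centres.ravel())
```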

  7. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
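    One building block of such an eye-controlled interface, locating the pupil centre in each camera image, can be approximated very simply by thresholding for the darkest pixels and taking their centroid, as in the hypothetical sketch below. The actual system also measures light-source angles and maps head motion to screen coordinates, which is not shown.

```python
import numpy as np

def pupil_center(gray_image, dark_threshold=40):
    """Estimate the pupil centre as the centroid of the darkest pixels.

    gray_image: 2-D array of 8-bit grayscale values from the eye camera.
    Returns (row, col) in image coordinates, or None if nothing dark enough.
    """
    mask = gray_image < dark_threshold          # pupil is the darkest region
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# Synthetic test frame: a bright background with a dark 10x10 "pupil" blob.
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[50:60, 70:80] = 10
print(pupil_center(frame))    # approximately (54.5, 74.5)
```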

  8. Future developments in brain-machine interface research.

    PubMed

    Lebedev, Mikhail A; Tate, Andrew J; Hanson, Timothy L; Li, Zheng; O'Doherty, Joseph E; Winans, Jesse A; Ifft, Peter J; Zhuang, Katie Z; Fitzsimmons, Nathan A; Schwarz, David A; Fuller, Andrew M; An, Je Hi; Nicolelis, Miguel A L

    2011-01-01

    Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interface that enacts upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering, and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition.

  9. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles.

    PubMed

    Eom, Hwisoo; Lee, Sang Hun

    2015-06-12

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model.

  10. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles

    PubMed Central

    Eom, Hwisoo; Lee, Sang Hun

    2015-01-01

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model. PMID:26076406

  11. Human-machine interface hardware: The next decade

    NASA Technical Reports Server (NTRS)

    Marcus, Elizabeth A.

    1991-01-01

    In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable, faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user interact and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroad in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.

  12. Design of Human-Machine Interface and altering of pelvic obliquity with RGR Trainer.

    PubMed

    Pietrusinski, Maciej; Unluhisarcikli, Ozer; Mavroidis, Constantinos; Cajigas, Iahn; Bonato, Paolo

    2011-01-01

    The Robotic Gait Rehabilitation (RGR) Trainer targets secondary gait deviations in stroke survivors undergoing rehabilitation. Using an impedance control strategy and a linear electromagnetic actuator, the device generates a force field to control pelvic obliquity through a Human-Machine Interface (i.e. a lower body exoskeleton). Herein we describe the design of the RGR Trainer Human-Machine Interface (HMI) and we demonstrate the system's ability to alter the pattern of movement of the pelvis during gait in a healthy subject. Results are shown for experiments during which we induced hip-hiking - in healthy subjects. Our findings indicate that the RGR Trainer has the ability of affecting pelvic obliquity during gait. Furthermore, we provide preliminary evidence of short-term retention of the modified pelvic obliquity pattern induced by the RGR Trainer. © 2011 IEEE

  13. Design of Human – Machine Interface and Altering of Pelvic Obliquity with RGR Trainer

    PubMed Central

    Pietrusinski, Maciej; Unluhisarcikli, Ozer; Mavroidis, Constantinos; Cajigas, Iahn; Bonato, Paolo

    2012-01-01

    The Robotic Gait Rehabilitation (RGR) Trainer targets secondary gait deviations in stroke survivors undergoing rehabilitation. Using an impedance control strategy and a linear electromagnetic actuator, the device generates a force field to control pelvic obliquity through a Human-Machine Interface (i.e. a lower body exoskeleton). Herein we describe the design of the RGR Trainer Human-Machine Interface (HMI) and we demonstrate the system’s ability to alter the pattern of movement of the pelvis during gait in a healthy subject. Results are shown for experiments during which we induced hip-hiking – in healthy subjects. Our findings indicate that the RGR Trainer has the ability of affecting pelvic obliquity during gait. Furthermore, we provide preliminary evidence of short-term retention of the modified pelvic obliquity pattern induced by the RGR Trainer. PMID:22275693

  14. Future developments in brain-machine interface research

    PubMed Central

    Lebedev, Mikhail A; Tate, Andrew J; Hanson, Timothy L; Li, Zheng; O'Doherty, Joseph E; Winans, Jesse A; Ifft, Peter J; Zhuang, Katie Z; Fitzsimmons, Nathan A; Schwarz, David A; Fuller, Andrew M; An, Je Hi; Nicolelis, Miguel A L

    2011-01-01

    Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interface that enacts upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering, and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition. PMID:21779720

  15. Transfer of control system interface solutions from other domains to the thermal power industry.

    PubMed

    Bligård, L-O; Andersson, J; Osvalder, A-L

    2012-01-01

    In a thermal power plant the operators' roles are to control and monitor the process to achieve efficient and safe production. To achieve this, the human-machine interfaces have a central part. The interfaces need to be updated and upgraded together with the technical functionality to maintain optimal operation. One way of achieving relevant updates is to study other domains and see how they have solved similar issues in their design solutions. The purpose of this paper is to present how interface design solution ideas can be transferred from domains with operator control to thermal power plants. In the study 15 domains were compared using a model for categorisation of human-machine systems. The result from the domain comparison showed that nuclear power, refinery and ship engine control were most similar to thermal power control. From the findings a basic interface structure and three specific display solutions were proposed for thermal power control: process parameter overview, plant overview, and feed water view. The systematic comparison of the properties of a human-machine system allowed interface designers to find suitable objects, structures and navigation logics in a range of domains that could be transferred to the thermal power domain.

  16. [Human machines--mechanical humans? The industrial arrangement of the relation between human being and machine on the basis of psychotechnik and Georg Schlesingers work with disabled soldiers].

    PubMed

    Patzel-Mattern, Katja

    2005-01-01

    The 20th century is the century of technical artefacts. With their existence and use they create an artificial reality within which humans have to position themselves. Psychotechnik is an attempt to enable humans to achieve this positioning. It gained importance in Germany after World War I and had its heyday between 1919 and 1926. On the basis of the work of the engineer and Psychotechnik proponent Georg Schlesinger, whose particular interest was disabled soldiers, this essay investigates the understanding of the body and of the human being in Psychotechnik as an applied science. It turns out that the biggest achievement of Psychotechnik was to establish a new view of the relation between human being and machine. Thus it helped to show that the human-machine interface is a shapable unit. Psychotechnik sees the human body and its physique as the final authority for the design of machines. Its main concern is to optimize the relation between human being and machine rather than to standardize human beings according to the construction of machines. After its splendid rise during the Weimar Republic and its rapid decline from the late 1920s onwards, Psychotechnik nowadays attracts scholarly attention as a historical phenomenon. The current discourse focuses on aspects concerning the philosophy of science: the unity of body and soul, the understanding of the human-machine interface as a shapable unit, and the human being as the final authority over this unit.

  17. Man-machine interface for the control of a lunar transport machine

    NASA Technical Reports Server (NTRS)

    Ashley, Richard; Bacon, Loring; Carlton, Scott Tim; May, Mark; Moore, Jimmy; Peek, Dennis

    1987-01-01

    A proposed first-generation human interface control panel is described which will be used to control SKITTER, a three-legged lunar walking machine. Under development at Georgia Tech, SKITTER will be a multi-purpose, unmanned vehicle capable of preparing a site for the proposed lunar base in advance of the arrival of humans. This walking machine will be able to accept modular special-purpose tools, such as a crane, a core sampling drill, and a digging device, among others. The project was concerned with the design of a human interface which could be used, from Earth, to control the movements of SKITTER on the lunar surface. Preliminary inquiries were also made into the modifications required to adapt the panel both to a shirt-sleeve lunar environment and to a mobile unit which could be used by an astronaut in a space suit at a lunar work site.

  18. Materials and optimized designs for human-machine interfaces via epidermal electronics.

    PubMed

    Jeong, Jae-Woong; Yeo, Woon-Hong; Akhtar, Aadeel; Norton, James J S; Kwack, Young-Jin; Li, Shuo; Jung, Sung-Young; Su, Yewang; Lee, Woosik; Xia, Jing; Cheng, Huanyu; Huang, Yonggang; Choi, Woon-Seop; Bretl, Timothy; Rogers, John A

    2013-12-17

    Thin, soft, and elastic electronics with physical properties well matched to the epidermis can be conformally and robustly integrated with the skin. Materials and optimized designs for such devices are presented for surface electromyography (sEMG). The findings enable sEMG from wide ranging areas of the body. The measurements have quality sufficient for advanced forms of human-machine interface. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. A Tool for Assessing the Text Legibility of Digital Human Machine Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roger Lew; Ronald L. Boring; Thomas A. Ulrich

    2015-08-01

    A tool intended to aid qualified professionals in assessing the legibility of text presented on a digital display is described. The assessment of legibility is primarily for the purposes of designing and analyzing human machine interfaces in accordance with NUREG-0700 and MIL-STD-1472G. The tool addresses shortcomings of existing guidelines by providing more accurate metrics of text legibility with greater sensitivity to design alternatives.

  20. Reverse-micelle-induced porous pressure-sensitive rubber for wearable human-machine interfaces.

    PubMed

    Jung, Sungmook; Kim, Ji Hoon; Kim, Jaemin; Choi, Suji; Lee, Jongsu; Park, Inhyuk; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2014-07-23

    A novel method to produce porous pressure-sensitive rubber is developed. For the controlled size distribution of embedded micropores, solution-based procedures using reverse micelles are adopted. The piezosensitivity of the pressure sensitive rubber is significantly increased by introducing micropores. Using this method, wearable human-machine interfaces are fabricated, which can be applied to the remote control of a robot. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are infused to reduce the influence of lighting and person dependence for the construction of online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.

  2. Goal Tracking in a Natural Language Interface: Towards Achieving Adjustable Autonomy

    DTIC Science & Technology

    1999-01-01

    ...communication, we believe that human/machine interfaces that share some of the characteristics of human-human communication can be friendlier and easier... natural means of communicating with a mobile robot. Although we are not claiming that communication with robotic agents must be patterned after human...

  3. Tactual interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Srinivasan, M. A.

    1991-01-01

    Increasingly complex human-machine interactions, such as in teleoperation or in virtual environments, have necessitated the optimal use of the human tactual channel for information transfer. This need leads to a demand for a basic understanding of how the human tactual system works, so that the tactual interface between the human and the machine can receive the command signals from the human, as well as display the information to the human, in a manner that appears natural to the human. The tactual information consists of two components: (1) contact information which specifies the nature of direct contact with the object; and (2) kinesthetic information which refers to the position and motion of the limbs. This paper is mostly concerned with contact information.

  4. Using machine learning to emulate human hearing for predictive maintenance of equipment

    NASA Astrophysics Data System (ADS)

    Verma, Dinesh; Bent, Graham

    2017-05-01

    At the current time, interfaces between humans and machines use only a limited subset of senses that humans are capable of. The interaction among humans and computers can become much more intuitive and effective if we are able to use more senses, and create other modes of communicating between them. New machine learning technologies can make this type of interaction become a reality. In this paper, we present a framework for a holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for various tasks such as predictive maintenance.

  5. Formal verification of human-automation interaction

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Heymann, Michael

    2002-01-01

    This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training material (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces.
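    The kind of ambiguity such verification looks for can be illustrated with a toy model check: if two machine states that look identical on the interface respond differently to the same operator action, the operator cannot predict the machine's next mode. The state names, actions, and abstraction map below are invented, and the check is a drastic simplification of the paper's formal methodology.

```python
from collections import defaultdict

# Hypothetical machine model: (machine_state, user_action) -> next machine state.
MACHINE = {
    ("VS_armed", "dial"): "VS_active",
    ("ALT_armed", "dial"): "ALT_hold",
    ("VS_active", "dial"): "VS_active",
}
# Interface abstraction: which display annunciation the operator sees in each state.
DISPLAY = {"VS_armed": "ARMED", "ALT_armed": "ARMED",
           "VS_active": "VS", "ALT_hold": "ALT"}

def find_ambiguities(machine, display):
    """Return (display_state, action) pairs whose outcome the operator cannot predict."""
    outcomes = defaultdict(set)
    for (state, action), nxt in machine.items():
        outcomes[(display[state], action)].add(display[nxt])
    return {key: nexts for key, nexts in outcomes.items() if len(nexts) > 1}

print(find_ambiguities(MACHINE, DISPLAY))
# {('ARMED', 'dial'): {'VS', 'ALT'}} -> the interface hides information the
# operator needs in order to anticipate the machine's next mode.
```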

  6. Design Guidelines and Criteria for User/Operator Transactions with Battlefield Automated Systems. Volume 5. Background Literature

    DTIC Science & Technology

    1981-02-01

    ...the machine. ARI's efforts in this area focus on human performance problems related to interactions with command and control centers, and on issues... improvement of the user-machine interface. Lacking consistent design principles, current practice results in a fragmented and unsystematic approach to system... complexity in the user-machine interface of BAS, ARI supported this effort for development of an online language for Army tactical intelligence...

  7. Proceedings of the 1986 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-01-01

    This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.

  8. Human factors in the presentation of computer-generated information - Aspects of design and application in automated flight traffic

    NASA Technical Reports Server (NTRS)

    Roske-Hofstrand, Renate J.

    1990-01-01

    The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.

  9. An operator interface design for a telerobotic inspection system

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tso, Kam S.; Hayati, Samad

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotics. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability, supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  10. Adaptive displays and controllers using alternative feedback.

    PubMed

    Repperger, D W

    2004-12-01

    Investigations on the design of haptic (force reflecting joystick or force display) controllers were conducted by viewing the display of force information within the context of several different paradigms. First, using analogies from electrical and mechanical systems, certain schemes of the haptic interface were hypothesized which may improve the human-machine interaction with respect to various criteria. A discussion is given on how this interaction benefits the electrical and mechanical system. To generalize this concept to the design of human-machine interfaces, three studies with haptic mechanisms were then synthesized and analyzed.

  11. Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment.

    ERIC Educational Resources Information Center

    Weller, Herman G.; Hartson, H. Rex

    1992-01-01

    Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…

  12. Biosleeve Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Assad, Christopher (Inventor)

    2016-01-01

    Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight fitting sleeve worn on a user arm and including a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal processing, and communications electronics may be built into the sleeve and control data may be transmitted wirelessly to the controlled machine or robotic device.

  13. Visualization tool for human-machine interface designers

    NASA Astrophysics Data System (ADS)

    Prevost, Michael P.; Banda, Carolyn P.

    1991-06-01

    As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
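    The spring/attractor/repeller metaphor lends itself to a small force-directed layout sketch: related information sources pull on each other in proportion to their relatedness, while an inverse-square repulsion keeps all sources apart. The display names, relatedness weights, and iteration constants below are hypothetical and only illustrate the idea, not the ISLE tool itself.

```python
import numpy as np

# Hypothetical relatedness matrix between four information sources: strongly
# related displays should end up near each other; all displays repel slightly.
names = ["altitude", "vertical_speed", "engine_temp", "fuel"]
related = np.array([[0.0, 0.9, 0.1, 0.1],
                    [0.9, 0.0, 0.1, 0.1],
                    [0.1, 0.1, 0.0, 0.7],
                    [0.1, 0.1, 0.7, 0.0]])

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(4, 2))             # initial screen positions

for _ in range(500):                             # crude force-directed iteration
    force = np.zeros_like(pos)
    for i in range(4):
        for j in range(4):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = max(np.linalg.norm(d), 0.1)   # avoid singular repulsion
            force[i] += related[i, j] * d            # spring: pull related items together
            force[i] -= 0.02 * (d / dist) / dist**2  # repeller: keep all items apart
    pos += 0.05 * force

for n, p in zip(names, pos):
    print(f"{n:15s} -> ({p[0]:.2f}, {p[1]:.2f})")
```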

  14. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for a cabin, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and the operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has a good prediction effect, and can improve design efficiency.
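    The two-stage structure described above, factor analysis to compress 16 joint angles into 4 comfort impact factors followed by a fitted prediction function, can be sketched with stand-in components: the snippet below uses randomly generated postures and substitutes an ordinary linear regression for the symbolic function evolved by GEP.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in data: 22 evaluated postures x 16 upper-limb joint angles
# (degrees) with a comfort score for each, mirroring the sample sizes above.
rng = np.random.default_rng(0)
joint_angles = rng.uniform(0, 120, size=(22, 16))
comfort_scores = rng.uniform(1, 10, size=22)

# Step 1: factor analysis compresses 16 joint angles into 4 comfort impact factors.
factors = FactorAnalysis(n_components=4, random_state=0).fit_transform(joint_angles)

# Step 2: fit a predictor from the 4 factors to the comfort score. A plain linear
# regression stands in here for the symbolic function evolved by GEP.
model = LinearRegression().fit(factors, comfort_scores)
print("predicted comfort for first posture:", model.predict(factors[:1])[0])
```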

  15. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, with operating comfort used to evaluate layout schemes. Joint angles describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, gives good predictions, and can improve design efficiency. PMID:26448740

  16. Man-machine interface requirements - advanced technology

    NASA Technical Reports Server (NTRS)

    Remington, R. W.; Wiener, E. L.

    1984-01-01

    Research issues and areas are identified where increased understanding of the human operator and the interaction between the operator and the avionics could lead to improvements in the performance of current and proposed helicopters. Both current and advanced helicopter systems and avionics are considered. Areas critical to man-machine interface requirements include: (1) artificial intelligence; (2) visual displays; (3) voice technology; (4) cockpit integration; and (5) pilot work loads and performance.

  17. Ultrasensitive and Highly Stable Resistive Pressure Sensors with Biomaterial-Incorporated Interfacial Layers for Wearable Health-Monitoring and Human-Machine Interfaces.

    PubMed

    Chang, Hochan; Kim, Sungwoong; Jin, Sumin; Lee, Seung-Woo; Yang, Gil-Tae; Lee, Ki-Young; Yi, Hyunjung

    2018-01-10

    Flexible piezoresistive sensors have huge potential for health monitoring, human-machine interfaces, prosthetic limbs, and intelligent robotics. A variety of nanomaterials and structural schemes have been proposed for realizing ultrasensitive flexible piezoresistive sensors. However, despite the success of recent efforts, sensitivity that is high only within narrow pressure ranges and/or challenging adhesion and stability issues still potentially limit their broad application. Herein, we introduce a biomaterial-based scheme for the development of flexible pressure sensors that are ultrasensitive (a resistance change of 5 orders of magnitude) over a broad pressure range of 0.1-100 kPa, promptly responsive (20 ms), and yet highly stable. We show that employing biomaterial-incorporated conductive networks of single-walled carbon nanotubes as interfacial layers of contact-based resistive pressure sensors significantly enhances the piezoresistive response via effective modulation of the interlayer resistance and provides stable interfaces for the pressure sensors. The developed flexible sensor is capable of real-time monitoring of wrist pulse waves under external medium pressure levels and of providing pressure profiles applied by a thumb and a forefinger during object manipulation at a low voltage (1 V) and power consumption (<12 μW). This work provides new insight into material candidates and approaches for the development of wearable health-monitoring and human-machine interfaces.

  18. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
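
    The matching step can be expressed very simply: an interface is a candidate for a device when its throughput meets the device's requirement and its latency does not exceed the tolerable maximum. A toy sketch (the numeric values are invented placeholders, not figures from the paper):

```python
# Each entry: (required throughput in bits/s, maximum tolerable latency in s).
device_requirements = {
    "domotics switch": (0.5, 5.0),
    "wheelchair":      (2.0, 1.0),
    "prosthetic hand": (5.0, 0.5),
}
# Each entry: (achievable throughput in bits/s, typical latency in s).
interface_performance = {
    "P300 BMI":          (0.8, 4.0),
    "motor-imagery BMI": (0.4, 2.0),
    "joystick HMI":      (20.0, 0.1),
}

def feasible_pairs(devices, interfaces):
    """Yield interface/device pairs whose throughput and latency requirements are met."""
    for dev, (need_tp, max_lat) in devices.items():
        for hmi, (tp, lat) in interfaces.items():
            if tp >= need_tp and lat <= max_lat:
                yield hmi, dev

for hmi, dev in feasible_pairs(device_requirements, interface_performance):
    print(f"{hmi} can control {dev}")
```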

  19. Best face forward.

    PubMed

    Rayport, Jeffrey F; Jaworski, Bernard J

    2004-12-01

    Most companies serve customers through a broad array of interfaces, from retail sales clerks to Web sites to voice-response telephone systems. But while the typical company has an impressive interface collection, it doesn't have an interface system. That is, the whole set does not add up to the sum of its parts in its ability to provide service and build customer relationships. Too many people and too many machines operating with insufficient coordination (and often at cross-purposes) mean rising complexity, costs, and customer dissatisfaction. In a world where companies compete not on what they sell but on how they sell it, turning that liability into an asset is what separates winners from losers. In this adaptation of their forthcoming book by the same title, Jeffrey Rayport and Bernard Jaworski explain how companies must reengineer their customer interface systems for optimal efficiency and effectiveness. Part of that transformation, they observe, will involve a steady encroachment by machine interfaces into areas that have long been the sacred province of humans. Managers now have opportunities unprecedented in the history of business to use machines, not just people, to credibly manage their interactions with customers. Because people and machines each have their strengths and weaknesses, company executives must identify what people do best, what machines do best, and how to deploy them separately and together. Front-office reengineering subjects every current and potential service interface to an analysis of opportunities for substitution (using machines instead of people), complementarity (using a mix of machines and people), and displacement (using networks to shift physical locations of people and machines), with the twin objectives of compressing costs and driving top-line growth through increased customer value.

  20. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  1. My thoughts through a robot's eyes: an augmented reality-brain-machine interface.

    PubMed

    Kansaku, Kenji; Hata, Naoki; Takano, Kouji

    2010-02-01

    A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.

  2. Delivering key signals to the machine: seeking the electric signal that muscles emanate

    NASA Astrophysics Data System (ADS)

    Bani Hashim, A. Y.; Maslan, M. N.; Izamshah, R.; Mohamad, I. S.

    2014-11-01

    Because the human body generates only limited electrical power, present human-machine interfaces have had limited success: standard electronic circuit designs do not account for the characteristics of the signals that originate at the skin. In general, the outcomes and applications of human-machine interfaces are limited to custom-designed subsystems, such as neuroprostheses. We seek to model the biodynamics beneath the skin as equivalent mathematical definitions, descriptions, and theorems. Within the human skin there are networks of nerves that permit the skin to function as a multi-dimensional transducer. We investigate the structure of the skin: apart from multiple nerve networks, it contains other segments such as minute muscles. We identify the segments that are active during electromyographic activity; when the nervous system fires signals, the muscle is stimulated. We evaluate the biodynamic phenomena of the muscles associated with the electromyographic activity of the nervous system. In effect, we design a relationship between the human somatosensory system and a synthetic sensory system as the union of a complete set in the new domain of the functional system. This classifies electromyogram waveforms linked to the intent of an operator. The system will become the basis for delivering key signals to a machine so that the machine acts under the operator's intent, in a master-slave manner.

  3. Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface

    PubMed Central

    De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara

    2011-01-01

    A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer’s skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574

  4. Emotion detection from text

    NASA Astrophysics Data System (ADS)

    Ramalingam, V. V.; Pandian, A.; Jaiswal, Abhijeet; Bhatia, Nikhar

    2018-04-01

    This paper presents a novel method, based on the concept of machine learning, for emotion detection using various Support Vector Machine algorithms; the major emotions described are linked to WordNet for enhanced accuracy. The proposed approach plays a promising role in augmenting artificial intelligence in the near future and could be vital in the optimization of human-machine interfaces.
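
    A minimal sketch of an SVM-based text emotion classifier of this kind, using scikit-learn (the tiny corpus and labels are invented for illustration; the WordNet linking step described in the paper is omitted):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["I am so happy today", "This is terrible news", "I feel wonderful",
         "I am scared of the dark", "What a joyful surprise", "I am furious about this"]
labels = ["joy", "sadness", "joy", "fear", "joy", "anger"]

# TF-IDF features fed into a linear support vector machine.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(texts, labels)

print(classifier.predict(["I am delighted with the result"]))
```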

  5. What Do We Really Need? Visions of an Ideal Human-Machine Interface for NOTES Mechatronic Support Systems From the View of Surgeons, Gastroenterologists, and Medical Engineers.

    PubMed

    Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Wilhelm, Dirk; Reiser, Silvano; Meining, Alexander; Feussner, Hubertus

    2015-08-01

    To investigate why natural orifice translumenal endoscopic surgery (NOTES) has not yet become widely accepted and to determine whether the main reason is still the lack of appropriate platforms due to the deficiency of applicable interfaces. To assess expectations of a suitable interface design, we performed a survey on human-machine interfaces for NOTES mechatronic support systems among surgeons, gastroenterologists, and medical engineers. Of 120 distributed questionnaires, each consisting of 14 distinct questions, 100 (83%) were eligible for analysis. A mechatronic platform for NOTES was considered "important" by 71% of surgeons, 83% of gastroenterologists, and 56% of medical engineers. "Intuitivity" and "simple to use" were the most favored aspects (33% to 51%). Haptic feedback was considered "important" by 70% of participants. In all, 53% of surgeons, 50% of gastroenterologists, and 33% of medical engineers already had experience with NOTES platforms or other surgical robots; however, current interfaces met expectations in just over 50% of cases. Whereas surgeons did not favor a certain working posture, gastroenterologists and medical engineers preferred a sitting position. Three-dimensional visualization was generally considered "nice to have" (67% to 72%); however, for 26% of surgeons, 17% of gastroenterologists, and 7% of medical engineers it did not matter (P = 0.018). Requests and expectations of human-machine interfaces for NOTES seem to be generally similar for surgeons, gastroenterologists, and medical engineers. Consensus exists on the importance of developing interfaces that are both intuitive and simple to use, are similar to preexisting familiar instruments, and exceed currently available systems. © The Author(s) 2014.

  6. A study of speech interfaces for the vehicle environment.

    DOT National Transportation Integrated Search

    2013-05-01

    Over the past few years, there has been a shift in automotive human machine interfaces from visual-manual interactions (pushing buttons and rotating knobs) to speech interaction. In terms of distraction, the industry views speech interaction as a...

  7. Human factors in space telepresence

    NASA Technical Reports Server (NTRS)

    Akin, D. L.; Howard, R. D.; Oliveria, J. S.

    1983-01-01

    The problems of interfacing a human with a teleoperation system for work in space are discussed. Much of the information presented here is the result of experience gained by the M.I.T. Space Systems Laboratory during the past two years of work on the ARAMIS (Automation, Robotics, and Machine Intelligence Systems) project. Many factors impact the design of the man-machine interface for a teleoperator; the effects of each are described in turn. An annotated bibliography gives the key references that were used. No conclusions are presented as a best design, since much depends on the particular application desired, and the relevant technology is swiftly changing.

  8. New generation emerging technologies for neurorehabilitation and motor assistance.

    PubMed Central

    Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele

    2016-01-01

    This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom of movement and the ability to perform Activities of Daily Living (ADLs). This would result in a deep improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the degree of participation of the patient in robotic training, allowing the system to recognize the user's intention and assist the patient in rehabilitation tasks, thus representing a fundamental aspect in eliciting motor learning. PMID:28484314

  9. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

    Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  10. Soft Material-Enabled, Flexible Hybrid Electronics for Medicine, Healthcare, and Human-Machine Interfaces

    PubMed Central

    Herbert, Robert; Kim, Jong-Hoon; Kim, Yun Soung; Lee, Hye Moon

    2018-01-01

    Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas. PMID:29364861

  11. Soft Material-Enabled, Flexible Hybrid Electronics for Medicine, Healthcare, and Human-Machine Interfaces.

    PubMed

    Herbert, Robert; Kim, Jong-Hoon; Kim, Yun Soung; Lee, Hye Moon; Yeo, Woon-Hong

    2018-01-24

    Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas.

  12. Screen-Printed Washable Electronic Textiles as Self-Powered Touch/Gesture Tribo-Sensors for Intelligent Human-Machine Interaction.

    PubMed

    Cao, Ran; Pu, Xianjie; Du, Xinyu; Yang, Wei; Wang, Jiaona; Guo, Hengyu; Zhao, Shuyu; Yuan, Zuqing; Zhang, Chi; Li, Congju; Wang, Zhong Lin

    2018-05-22

    Multifunctional electronic textiles (E-textiles) with embedded electric circuits hold great application prospects for future wearable electronics. However, most E-textiles still have critical challenges, including air permeability, satisfactory washability, and mass fabrication. In this work, we fabricate a washable E-textile that addresses all of the concerns and shows its application as a self-powered triboelectric gesture textile for intelligent human-machine interfacing. Utilizing conductive carbon nanotubes (CNTs) and screen-printing technology, this kind of E-textile embraces high conductivity (0.2 kΩ/sq), high air permeability (88.2 mm/s), and can be manufactured on common fabric at large scales. Due to the advantage of the interaction between the CNTs and the fabrics, the electrode shows excellent stability under harsh mechanical deformation and even after being washed. Moreover, based on a single-electrode mode triboelectric nanogenerator and electrode pattern design, our E-textile exhibits highly sensitive touch/gesture sensing performance and has potential applications for human-machine interfacing.

  13. State of the art in nuclear telerobotics: focus on the man/machine connection

    NASA Astrophysics Data System (ADS)

    Greaves, Amna E.

    1995-12-01

    The interface between the human controller and remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality, and multi degree-of-freedom input devices lend us the ability to augment the man/machine interface, and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI that enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting edge concepts and enhancements that lead to the next generation of telerobotic interface systems.

  14. DESIGN AND EVALUATION OF INDIVIDUAL ELEMENTS OF THE INTERFACE FOR AN AGRICULTURAL MACHINE.

    PubMed

    Rakhra, Aadesh K; Mann, Danny D

    2018-01-29

    If a user-centered approach is not used to design information displays, the quantity and quality of information presented to the user may not match the needs of the user, or it may exceed the capability of the human operator for processing and using that information. The result may be an excessive mental workload and reduced situation awareness of the operator, which can negatively affect the machine performance and operational outcomes. The increasing use of technology in agricultural machines may expose the human operator to excessive and undesirable information if the operator's information needs and information processing capabilities are ignored. In this study, a user-centered approach was used to design specific interface elements for an agricultural air seeder. Designs of the interface elements were evaluated in a laboratory environment by developing high-fidelity prototypes. Evaluations of the user interface elements yielded significant improvement in situation awareness (up to 11%; overall mean difference = 5.0 (4.8%), 95% CI (6.4728, 3.5939), p < 0.0001). Mental workload was reduced by up to 19.7% (overall mean difference = -5.2 (-7.9%), n = 30, α = 0.05). Study participants rated the overall performance of the newly designed user-centered interface elements higher in comparison to the previous designs (overall mean difference = 27.3 (189.8%), 99% CI (35.150, 19.384), p < 0.0001). Copyright© by the American Society of Agricultural Engineers.

  15. New generation emerging technologies for neurorehabilitation and motor assistance.

    PubMed

    Frisoli, Antonio; Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele

    2016-12-01

    This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom of movement and the ability to perform Activities of Daily Living (ADLs). This would result in a deep improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the degree of participation of the patient in robotic training, allowing the system to recognize the user's intention and assist the patient in rehabilitation tasks, thus representing a fundamental aspect in eliciting motor learning.

  16. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    ERIC Educational Resources Information Center

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  17. Mechanically Compliant Electronic Materials for Wearable Photovoltaics and Human-Machine Interfaces

    NASA Astrophysics Data System (ADS)

    O'Connor, Timothy Francis, III

    Applications of stretchable electronic materials for human-machine interfaces are described herein. Intrinsically stretchable organic conjugated polymers and stretchable electronic composites were used to develop stretchable organic photovoltaics (OPVs), mechanically robust wearable OPVs, and human-machine interfaces for gesture recognition, American Sign Language translation, haptic control of robots, and touch emulation for virtual reality, augmented reality, and the transmission of touch. The stretchable and wearable OPVs comprise active layers of poly-3-alkylthiophene:phenyl-C61-butyric acid methyl ester (P3AT:PCBM) and transparent conductive electrodes of poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS), and devices could only be fabricated through a deep understanding of the connection between molecular structure and the co-engineering of electronic performance with mechanical resilience. The talk concludes with the use of composite piezoresistive sensors in two smart glove prototypes. The first integrates stretchable strain sensors comprising a carbon-elastomer composite, a wearable microcontroller, low-energy Bluetooth, and a 6-axis accelerometer/gyroscope to construct a fully functional gesture recognition glove capable of wirelessly translating American Sign Language to text on a cell phone screen. The second creates a system for the haptic control of a 3D-printed robot arm, as well as the transmission of touch and temperature information.

  18. Multichannel noninvasive human-machine interface via stretchable µm thick sEMG patches for robot manipulation

    NASA Astrophysics Data System (ADS)

    Zhou, Ying; Wang, Youhua; Liu, Runfeng; Xiao, Lin; Zhang, Qin; Huang, YongAn

    2018-01-01

    Epidermal electronics (e-skin) emerging in recent years offer the opportunity to noninvasively and wearably extract biosignals from human bodies. The conventional fabrication of e-skin, based on standard microelectronic fabrication processes and a variety of transfer printing methods, nevertheless constrains the size of the devices, posing a serious challenge to collecting signals via the skin, the largest organ of the human body. Herein we propose a multichannel noninvasive human-machine interface (HMI) using stretchable surface electromyography (sEMG) patches to realize a robot hand mimicking human gestures. Time-efficient processes are first developed to manufacture µm-thick, large-scale stretchable devices. With micron thickness, the stretchable µm-thick sEMG patches show excellent conformability with human skin and consequently electrical performance comparable to that of conventional gel electrodes. Combined with the large-scale size, the multichannel noninvasive HMI via stretchable µm-thick sEMG patches successfully manipulates the robot hand with eight different gestures, with precision as high as that of a conventional gel electrode array.

  19. Analysis of operational comfort in manual tasks using human force manipulability measure.

    PubMed

    Tanaka, Yoshiyuki; Nishikawa, Kazuo; Yamada, Naoki; Tsuji, Toshio

    2015-01-01

    This paper proposes a scheme for human force manipulability (HFM) based on the use of isometric joint torque properties to simulate the spatial characteristics of human operation forces at an end-point of a limb with feasible magnitudes for a specified limb posture. This is also applied to the evaluation/prediction of operational comfort (OC) when manually operating a human-machine interface. The effectiveness of HFM is investigated through two experiments and computer simulations of humans generating forces by using their upper extremities. Operation force generation with maximum isometric effort can be roughly estimated with an HFM measure computed from information on the arm posture during a maintained posture. The layout of a human-machine interface is then discussed based on the results of operational experiments using an electric gear-shifting system originally developed for robotic devices. The results indicate a strong relationship between the spatial characteristics of the HFM and OC levels when shifting, and the OC is predicted by using a multiple regression model with HFM measures.
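
    As a rough numerical illustration of a force-manipulability-style measure (the planar two-link arm, link lengths, and unit joint-torque bound below are assumptions; the paper's HFM measure additionally uses isometric joint torque properties):

```python
import numpy as np

def planar_2link_jacobian(q1, q2, l1=0.3, l2=0.35):
    """Jacobian of a planar two-link arm (link lengths in metres, assumed values)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q1, q2 = np.deg2rad(40), np.deg2rad(70)      # maintained arm posture
J = planar_2link_jacobian(q1, q2)

# A unit joint-torque ball maps to an end-point force ellipsoid whose
# semi-axes are 1/sigma_i along the left singular vectors of J.
U, sigma, _ = np.linalg.svd(J)
force_semi_axes = 1.0 / sigma
print("force ellipsoid semi-axes:", np.round(force_semi_axes, 2))
print("principal directions:\n", np.round(U, 2))
```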

  20. Cursor control by Kalman filter with a non-invasive body–machine interface

    PubMed Central

    Seáñez-González, Ismael; Mussa-Ivaldi, Ferdinando A

    2015-01-01

    Objective We describe a novel human–machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user’s upper-body. Approach A calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor. Main results Our results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body–machine interface systems as an alternative or complement to brain–machine interfaces for accomplishing cursor control in 2D space. Significance The present study may serve as a platform for people with high-tetraplegia to control assistive devices such as powered wheelchairs using a joystick. PMID:25242561
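
    A condensed sketch of the decoding step: a generic linear Kalman filter mapping calibrated body-motion signals to cursor state (the matrices, dimensions, and synthetic measurements are illustrative, not those identified in the paper):

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x_pred = A @ x                       # predict state
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 0.02                                     # 50 Hz update, assumed
A = np.block([[np.eye(2), dt * np.eye(2)],    # constant-velocity cursor model [x, y, vx, vy]
              [np.zeros((2, 2)), np.eye(2)]])
H = np.random.default_rng(0).normal(size=(8, 4)) * 0.1   # stand-in calibration map: 8 IMU signals
Q = 1e-4 * np.eye(4)
R = 1e-2 * np.eye(8)

x, P = np.zeros(4), np.eye(4)
for _ in range(100):                          # stream of shoulder-motion measurements
    z = H @ np.array([0.5, 0.2, 0.0, 0.0]) + 0.01 * np.random.randn(8)
    x, P = kalman_step(x, P, z, A, H, Q, R)
print("estimated cursor state:", np.round(x, 3))
```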

  1. Developing Human-Machine Interfaces to Support Appropriate Trust and Reliance on Automated Combat Identification Systems (Developpement d’Interfaces Homme-Machine Pour Appuyer la Confiance dans les Systemes Automatises d’Identification au Combat)

    DTIC Science & Technology

    2008-03-31

    …on automation; the 'response bias' approach. This new approach is based on Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens…). According to SDT, response bias will vary with the expectation of the target probability, whereas sensitivity will stay constant (Macmillan & Creelman…). Among bias measures, C has the simplest statistical properties (Macmillan & Creelman, 1991, p. 273), and it was also the measure used in Dzindolet et al.'s study.

  2. Intelligent Adaptive Interface: A Design Tool for Enhancing Human-Machine System Performances

    DTIC Science & Technology

    2009-10-01

    …and customizable. Thus, an intelligent interface should tailor its parameters to certain prescribed specifications or convert itself and adjust to…

  3. Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    NASA Technical Reports Server (NTRS)

    Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)

    1988-01-01

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.

  4. CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Komurlu, Caner; Blaha, Leslie M.

    We developed CHISSL, a human-machine interface that utilizes supervised machine learning in an unsupervised context to help the user group unlabeled instances by her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user's interactions, CHISSL trains a classification model guided by the user's grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
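
    A toy sketch of the correct/confirm interaction loop, with a nearest-centroid rule standing in for CHISSL's actual model (the data, seeding, and update rule are illustrative assumptions):

```python
import numpy as np

def predict_groups(instances, centroids):
    """Assign each unlabeled instance to the nearest user-defined group centroid."""
    d = np.linalg.norm(instances[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def user_correction(instances, assignments, idx, correct_group, centroids):
    """User moves a misplaced instance; the affected centroid is re-estimated."""
    assignments[idx] = correct_group
    members = instances[assignments == correct_group]
    centroids[correct_group] = members.mean(axis=0)
    return assignments, centroids

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
centroids = np.array([[0.0, 0.0], [4.0, 4.0]])   # seeded by the user's first few placements

groups = predict_groups(X, centroids)
groups, centroids = user_correction(X, groups, idx=3, correct_group=0, centroids=centroids)
print(np.bincount(groups))   # group sizes after one correction
```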

  5. Human factors issues in telerobotic systems for Space Station Freedom servicing

    NASA Technical Reports Server (NTRS)

    Malone, Thomas B.; Permenter, Kathryn E.

    1990-01-01

    Requirements for Space Station Freedom servicing are described and the state-of-the-art for telerobotic system on-orbit servicing of spacecraft is defined. The projected requirements for the Space Station Flight Telerobotic Servicer (FTS) are identified. Finally, the human factors issues in telerobotic servicing are discussed. The human factors issues are basically three: the definition of the role of the human versus automation in system control; the identification of operator-device interface design requirements; and the requirements for development of an operator-machine interface simulation capability.

  6. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    NASA Astrophysics Data System (ADS)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  7. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    NASA Astrophysics Data System (ADS)

    Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control; a tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures developed especially for the validation of the proposed system.
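
    A compact sketch of the classification stage: an ordinary local binary pattern histogram computed on a depth frame and fed to an SVM (the paper uses its own spatio-temporal LBP variant on real depth video, so treat this as a generic stand-in):

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(depth, bins=256):
    """8-neighbour local binary pattern histogram of a depth frame."""
    c = depth[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = depth[1 + dy:depth.shape[0] - 1 + dy, 1 + dx:depth.shape[1] - 1 + dx]
        codes |= (neighbour >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

# Hypothetical training set: 40 depth frames of two hand gestures.
rng = np.random.default_rng(0)
frames = rng.uniform(0.5, 1.5, size=(40, 32, 32))
labels = np.repeat([0, 1], 20)
features = np.array([lbp_histogram(f) for f in frames])

clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:2]))
```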

  8. Human Machine Interfaces for Teleoperators and Virtual Environments: Conference Held in Santa Barbara, California on 4-9 March 1990.

    DTIC Science & Technology

    1990-03-01

    …decided to have three kinds of sessions: invited-paper sessions, panel discussions, and poster sessions. The invited papers were divided into papers… soon followed. Applications in medicine, involving exploration and operation within the human body, are now receiving increased attention. Early… attention toward issues that may be important for the design of auditory interfaces. The importance of appropriate auditory inputs to observers with normal…

  9. The remapping of space in motor learning and human-machine interfaces

    PubMed Central

    Mussa-Ivaldi, F.A.; Danziger, Z.

    2009-01-01

    Studies of motor adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. One of the most fundamental elements of our environment is space itself. This article focuses on the notion of Euclidean space as it applies to common sensory motor experiences. Starting from the assumption that we interact with the world through a system of neural signals, we observe that these signals are not inherently endowed with metric properties of the ordinary Euclidean space. The ability of the nervous system to represent these properties depends on adaptive mechanisms that reconstruct the Euclidean metric from signals that are not Euclidean. Gaining access to these mechanisms will reveal the process by which the nervous system handles novel sophisticated coordinate transformation tasks, thus highlighting possible avenues to create functional human-machine interfaces that can make that task much easier. A set of experiments is presented that demonstrate the ability of the sensory-motor system to reorganize coordination in novel geometrical environments. In these environments multiple degrees of freedom of body motions are used to control the coordinates of a point in a two-dimensional Euclidean space. We discuss how practice leads to the acquisition of the metric properties of the controlled space. Methods of machine learning based on the reduction of reaching errors are tested as a means to facilitate learning by adaptively changing the map from body motions to the controlled device. We discuss the relevance of the results to the development of adaptive human-machine interfaces and optimal control. PMID:19665553

  10. A vibro-haptic human-machine interface for structural health monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mascarenas, David; Plont, Crystal; Brown, Christina

    The structural health monitoring (SHM) community’s goal has been to endow physical systems with a nervous system not unlike those commonly found in living organisms. Typically the SHM community has attempted to do this by instrumenting structures with a variety of sensors, and then applying various signal processing and classification procedures to the data in order to detect the presence of damage, the location of damage, and the severity of damage, and to estimate the remaining useful life of the structure. This procedure has had some success, but we are still a long way from achieving the performance of nervous systems found in biology. This is primarily because contemporary classification algorithms do not have the performance required; in many cases expert judgment is superior to automated classification. This work introduces a new paradigm. We propose interfacing the human nervous system to the distributed sensor network located on the structure and developing new techniques to enable human-machine cooperation. Results from the field of sensory substitution suggest this should be possible. This study investigates a vibro-haptic human-machine interface for SHM. The investigation was performed using a surrogate three-story structure. The structure features three nonlinearity-inducing bumpers to simulate damage. Accelerometers are placed on each floor to measure the response of the structure to a harmonic base excitation. The accelerometer measurements are preprocessed, and the preprocessed data are then encoded as a vibro-tactile stimulus. Human subjects were then subjected to the vibro-tactile stimulus and asked to characterize the damage in the structure.
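
    One plausible way to encode a preprocessed accelerometer channel as a vibro-tactile stimulus is to map windowed signal energy onto the amplitude of a tactor carrier; the sketch below is a guess at that general idea, not the encoding used in the study (sampling rate, window length, and carrier frequency are assumptions):

```python
import numpy as np

def encode_vibrotactile(accel, fs=500.0, window_s=0.5, carrier_hz=250.0):
    """Map windowed RMS of an accelerometer channel onto the amplitude of a tactor carrier."""
    n = int(window_s * fs)
    bursts = []
    for start in range(0, len(accel) - n, n):
        rms = np.sqrt(np.mean(accel[start:start + n] ** 2))
        t = np.arange(n) / fs
        bursts.append(np.tanh(rms) * np.sin(2 * np.pi * carrier_hz * t))  # amplitude-modulated burst
    return np.concatenate(bursts)

accel = np.random.default_rng(0).normal(scale=0.3, size=5000)   # synthetic floor response
stimulus = encode_vibrotactile(accel)
print(stimulus.shape)
```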

  11. A vibro-haptic human-machine interface for structural health monitoring

    DOE PAGES

    Mascarenas, David; Plont, Crystal; Brown, Christina; ...

    2014-11-01

    The structural health monitoring (SHM) community’s goal has been to endow physical systems with a nervous system not unlike those commonly found in living organisms. Typically the SHM community has attempted to do this by instrumenting structures with a variety of sensors, and then applying various signal processing and classification procedures to the data in order to detect the presence of damage, the location of damage, and the severity of damage, and to estimate the remaining useful life of the structure. This procedure has had some success, but we are still a long way from achieving the performance of nervous systems found in biology. This is primarily because contemporary classification algorithms do not have the performance required; in many cases expert judgment is superior to automated classification. This work introduces a new paradigm. We propose interfacing the human nervous system to the distributed sensor network located on the structure and developing new techniques to enable human-machine cooperation. Results from the field of sensory substitution suggest this should be possible. This study investigates a vibro-haptic human-machine interface for SHM. The investigation was performed using a surrogate three-story structure. The structure features three nonlinearity-inducing bumpers to simulate damage. Accelerometers are placed on each floor to measure the response of the structure to a harmonic base excitation. The accelerometer measurements are preprocessed, and the preprocessed data are then encoded as a vibro-tactile stimulus. Human subjects were then subjected to the vibro-tactile stimulus and asked to characterize the damage in the structure.

  12. Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Queen, Steven M.; Sanner, Kurt Gregory

    2011-01-01

    One of the pilot-machine interfaces (the forward viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times, and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.

  13. Human perceptual deficits as factors in computer interface test and evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowser, S.E.

    1992-06-01

    Issues related to testing and evaluating human-computer interfaces are usually based on the machine rather than on the human portion of the interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment are more likely to be defined than user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits tend to occur in certain user groups at rates higher than in the overall population. The test and evaluation community can address the issue from two primary aspects. First, assessment of user characteristics should be extended to include tests of perceptual capability. Second, interface designs should use multimode information coding.

  14. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    PubMed

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

    This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them together. In this way, both the eye movements and ERPs complement each other. Therefore, it can provide a better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects can complete tasks effectively by using the proposed interface, whereas the best completion time is relatively short and very close to the one operated by hand.
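
    A minimal sketch of the kind of amplitude-threshold rule an EOG mode can rely on, here for blink detection only (the sampling rate, threshold, and refractory window are assumptions; the paper's algorithm distinguishes blink, wink, gaze, and frown):

```python
import numpy as np

def detect_blinks(eog, fs=250.0, threshold=150.0, refractory=0.3):
    """Return sample indices where the vertical EOG exceeds a fixed amplitude threshold."""
    blinks, last = [], -np.inf
    for i, v in enumerate(eog):
        if v > threshold and (i - last) / fs > refractory:
            blinks.append(i)
            last = i
    return blinks

# Synthetic vertical EOG trace (µV) with two blink-like deflections.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
eog = 20 * np.random.default_rng(0).normal(size=t.size)
for t0 in (1.0, 2.5):
    eog += 300 * np.exp(-((t - t0) ** 2) / (2 * 0.05 ** 2))

print([round(i / fs, 2) for i in detect_blinks(eog, fs)])  # approx. 1.0 s and 2.5 s
```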

  15. Brain-machine interfacing control of whole-body humanoid motion

    PubMed Central

    Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun

    2014-01-01

    We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to human-like mechanical avatar. Our solution is based on the adequate reduction of the controllable dimensionality of a high-DOF humanoid motion in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complement subspace part of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with motor imagery task. PMID:25140134

  16. Digital Systems Validation Handbook. Volume 2. Chapter 19. Pilot - Vehicle Interface

    DTIC Science & Technology

    1993-11-01

    …checklists, and other status messages. Voice interactive systems are defined as "the interface between a cooperative human and a machine, which involves…" Chapter contents include the pilot-vehicle interface, crew interaction and the cockpit, crew resource management and safety, and pilot and crew training. …systems was a "stand-alone" component performing its intended function. Systems and their cockpit interfaces were added as technological advances were…

  17. Stretchable, Transparent, Ultrasensitive, and Patchable Strain Sensor for Human-Machine Interfaces Comprising a Nanohybrid of Carbon Nanotubes and Conductive Elastomers.

    PubMed

    Roh, Eun; Hwang, Byeong-Ung; Kim, Doil; Kim, Bo-Yeong; Lee, Nae-Eung

    2015-06-23

    Interactivity between humans and smart systems, including wearable, body-attachable, or implantable platforms, can be enhanced by realization of multifunctional human-machine interfaces, where a variety of sensors collect information about the surrounding environment, intentions, or physiological conditions of the human to which they are attached. Here, we describe a stretchable, transparent, ultrasensitive, and patchable strain sensor that is made of a novel sandwich-like stacked piezoresistive nanohybrid film of single-wall carbon nanotubes (SWCNTs) and a conductive elastomeric composite of polyurethane (PU)-poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS). This sensor, which can detect small strains on human skin, was created using environmentally benign water-based solution processing. We attributed the tunability of strain sensitivity (i.e., gauge factor), stability, and optical transparency to enhanced formation of percolating networks between conductive SWCNTs and PEDOT phases at interfaces in the stacked PU-PEDOT:PSS/SWCNT/PU-PEDOT:PSS structure. The mechanical stability, high stretchability of up to 100%, optical transparency of 62%, and gauge factor of 62 suggested that when attached to the skin of the face, this sensor would be able to detect small strains induced by emotional expressions such as laughing and crying, as well as eye movement, and we confirmed this experimentally.
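
    For reference, the gauge factor quoted above relates fractional resistance change to applied strain, GF = (ΔR/R0)/ε; a quick arithmetic check with illustrative numbers (not measurements from the paper):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor = (delta R / R0) / strain for a piezoresistive strain sensor."""
    return ((r - r0) / r0) / strain

# Illustrative values: resistance rising from 10 kOhm to 16.2 kOhm at 1% strain.
print(gauge_factor(10e3, 16.2e3, 0.01))   # 62.0, comparable to the reported gauge factor
```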

  18. Simulation of the «COSMONAUT-ROBOT» System Interaction on the Lunar Surface Based on Methods of Machine Vision and Computer Graphics

    NASA Astrophysics Data System (ADS)

    Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.

    2017-05-01

    Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is a proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. multimodal contactless interface based on recognition of gestures and cosmonaut's poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools for tracking cosmonaut's position and movements, and on the basis of these data builds its itinerary. The interaction in the system "cosmonaut-robot" on the lunar surface is significantly different from that on the Earth surface. For example, a man, dressed in a space suit, has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human being less accurately performs movements and often makes mistakes. All this leads to new requirements for the convenient use of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication it is necessary to provide options for duplicating commands at the task stages and gesture recognition. New tools and techniques for space missions must be examined at the first stage of works in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes the methods of detection and tracking of movements and gesture recognition of the cosmonaut during EVA, which can be used for the design of human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. Simulation involves environment visualization and modeling of the use of the "vision" of the robot to track a moving cosmonaut dressed in a spacesuit.

  19. FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN

    NASA Astrophysics Data System (ADS)

    Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando

    2014-06-01

    The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and JQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries and pushing the development efforts to a higher abstraction layer based on a scripting language allow for significant reduction in maintenance of the code in multi-platform environments compared to those currently used in C++ visualization plugins. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.

  20. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and to recognize a visually handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of the corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent pedestrians. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms effectively enable the robot to detect the center of the corridor and the position of its follower correctly.
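A much-simplified version of the corridor-centre idea is sketched below: compare the laser range toward the left wall with the range toward the right wall and report the lateral offset. The corridor geometry is synthetic, and this is only an illustration of the underlying principle, not the detection algorithm proposed in the paper.

```python
import numpy as np

# Toy corridor-centre estimate from a laser range finder (LRF) scan.
corridor_width = 2.4        # metres (assumed)
robot_y = 0.3               # robot sits 0.3 m left of the centreline (synthetic truth)

angles = np.deg2rad(np.arange(-90, 91))   # beam angles: -90 deg = right, +90 deg = left
with np.errstate(divide="ignore"):
    to_left = (corridor_width / 2 - robot_y) / np.sin(angles)     # wall at +y
    to_right = -(corridor_width / 2 + robot_y) / np.sin(angles)   # wall at -y
ranges = np.where(angles > 0, to_left, np.where(angles < 0, to_right, np.inf))

d_right, d_left = ranges[0], ranges[-1]    # beams at -90 deg and +90 deg
offset = (d_left - d_right) / 2.0          # negative => robot is left of centre
print(f"left {d_left:.2f} m, right {d_right:.2f} m, offset from centre {offset:.2f} m")
```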

  1. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system, the purpose is to train, inform, alter, or study the human operator to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they had little impact outside aviation presumably because the application was so specialized and so expensive.

  2. Software Engineering for User Interfaces. Technical Report.

    ERIC Educational Resources Information Center

    Draper, Stephen W.; Norman, Donald A.

    The discipline of software engineering can be extended in a natural way to deal with the issues raised by a systematic approach to the design of human-machine interfaces. The user should be treated as part of the system being designed and projects should be organized to take into account the current lack of a priori knowledge of user interface…

  3. State transition storyboards: A tool for designing the Goldstone solar system radar data acquisition system user interface software

    NASA Technical Reports Server (NTRS)

    Howard, S. D.

    1987-01-01

    Effective user interface design in software systems is a complex task that takes place without adequate modeling tools. By combining state transition diagrams and the storyboard technique of filmmakers, State Transition Storyboards were developed to provide a detailed modeling technique for the Goldstone Solar System Radar Data Acquisition System human-machine interface. Illustrations are included with a description of the modeling technique.
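For readers unfamiliar with the technique, a state transition storyboard can be thought of as a scripted sequence of user events replayed against a (state, event) to next-state table, each step forming one "frame". The sketch below uses invented states and events, not the Goldstone interface model, to show how replaying a storyboard can surface an unmodelled path.

```python
# Minimal state-transition-storyboard sketch with made-up states and events.
TRANSITIONS = {
    ("idle", "select_target"): "target_selected",
    ("target_selected", "configure"): "configuring",
    ("configuring", "confirm"): "acquiring",
    ("acquiring", "stop"): "idle",
}

def replay(storyboard, state="idle"):
    """Step through a storyboard of user events, printing each frame."""
    for event in storyboard:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            print(f"  {state} --{event}--> (undefined: the storyboard reveals a design gap)")
            return state
        print(f"  {state} --{event}--> {nxt}")
        state = nxt
    return state

replay(["select_target", "configure", "confirm", "stop"])
replay(["select_target", "confirm"])   # skipped step: surfaces an unmodelled path
```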

  4. Man-machine interface issues in space telerobotics: A JPL research and development program

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1987-01-01

    Technology issues related to the use of robots as man-extension or telerobot systems in space are discussed and exemplified. General considerations are presented on control and information problems in space teleoperation and on the characteristics of Earth orbital teleoperation. The JPL R and D work in the area of man-machine interface devices and techniques for sensing and computer-based control is briefly summarized. The thrust of this R and D effort is to render space teleoperation efficient and safe through the use of devices and techniques which will permit integrated and task-level (intelligent) two-way control communication between human operator and telerobot machine in Earth orbit. Specific control and information display devices and techniques are discussed and exemplified with development results obtained at JPL in recent years.

  5. Hands-free human-machine interaction with voice

    NASA Astrophysics Data System (ADS)

    Juang, B. H.

    2004-05-01

    Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation to save substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, to enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call "hands-free" human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from the use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.

  6. Considerations for human-machine interfaces in tele-operations

    NASA Technical Reports Server (NTRS)

    Newport, Curt

    1991-01-01

    Numerous factors impact on the efficiency of tele-operative manipulative work. Generally, these are related to the physical environment of the tele-operator and how he interfaces with robotic control consoles. The capabilities of the operator can be influenced by considerations such as temperature, eye strain, body fatigue, and boredom created by repetitive work tasks. In addition, the successful combination of man and machine will, in part, be determined by the configuration of the visual and physical interfaces available to the teleoperator. The design and operation of system components such as full-scale and mini-master manipulator controllers, servo joysticks, and video monitors will have a direct impact on operational efficiency. As a result, the local environment and the interaction of the operator with the robotic control console have a substantial effect on mission productivity.

  7. Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.

    PubMed

    Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen

    2012-01-01

    An extremely reliable remote control system for a subsea blowout preventer stack is developed based on an off-the-shelf triple modular redundancy system. To meet the high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy, and network redundancy are used to design the system hardware architecture. The control logic, the graphical design of the human-machine interface, and the redundant databases are developed using off-the-shelf software. A series of experiments was performed in the laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For faults of the programmable logic controllers, discrete input groups, and analog input groups, the control system gave correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
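The triple modular redundancy mentioned above ultimately reduces to two-out-of-three voting on each redundant channel. The sketch below shows that voting rule in isolation, with synthetic values; it is not the vendor's controller logic.

```python
from collections import Counter

# Two-out-of-three majority voting, the basic idea behind triple modular
# redundancy (TMR). Channel values here are synthetic.
def vote(a, b, c):
    """Return the majority value of three redundant channels, or raise if all disagree."""
    value, n = Counter([a, b, c]).most_common(1)[0]
    if n < 2:
        raise ValueError("all three channels disagree - declare the redundant group failed")
    return value

print(vote(1, 1, 1))   # healthy channels -> 1
print(vote(1, 0, 1))   # one faulty channel is out-voted -> 1
```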

  8. Quadcopter control using a BCI

    NASA Astrophysics Data System (ADS)

    Rosca, S.; Leba, M.; Ionica, A.; Gamulescu, O.

    2018-01-01

    The paper presents how two ubiquitous technologies can be interconnected. On one hand, there are drones, which are increasingly present and integrated into more and more fields of activity, moving beyond the military applications they originated from towards entertainment, real estate, delivery, and so on. On the other hand, there are unconventional man-machine interfaces, which remain a rich topic to explore now and in the future. Of these, we chose the brain-computer interface (BCI), which allows human-machine interaction without requiring any moving elements. The research consists of mathematical modeling and numerical simulation of a drone and a BCI. An application using a Parrot mini-drone and an Emotiv Insight BCI is then presented.

  9. Techno-Human Mesh: The Growing Power of Information Technologies.

    ERIC Educational Resources Information Center

    West, Cynthia K.

    This book examines the intersection of information technologies, power, people, and bodies. It explores how information technologies are on a path of creating efficiency, productivity, profitability, surveillance, and control, and looks at the ways in which human-machine interface technologies, such as wearable computers, biometric technologies,…

  10. ODISEES: A New Paradigm in Data Access

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Little, M. M.; Kusterer, J.

    2013-12-01

    As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
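The RDF-triple and SPARQL machinery described above can be illustrated in a few lines of rdflib: build a small graph mapping a parameter to a dataset and query it. The namespace, property names, and instance names below are invented placeholders, not terms from the actual ES Ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Tiny sketch of a parameter-to-dataset mapping stored as RDF triples and
# queried through SPARQL. All vocabulary below is illustrative only.
EX = Namespace("http://example.org/es#")
g = Graph()
g.add((EX.AerosolOpticalDepth, RDF.type, EX.Parameter))
g.add((EX.AerosolOpticalDepth, EX.measuredBy, EX.ExampleInstrument))
g.add((EX.AerosolOpticalDepth, EX.occursIn, EX.ExampleDataset))
g.add((EX.ExampleDataset, EX.hasProcessingLevel, Literal("L2")))

query = """
PREFIX ex: <http://example.org/es#>
SELECT ?dataset ?level WHERE {
    ex:AerosolOpticalDepth ex:occursIn ?dataset .
    ?dataset ex:hasProcessingLevel ?level .
}
"""
for dataset, level in g.query(query):
    print(dataset, level)
```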

  11. On the applicability of brain reading for predictive human-machine interfaces in robotics.

    PubMed

    Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred

    2013-01-01

    The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires behavior similar to that performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
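As a toy illustration of single-trial classification of event-related EEG activity (the core operation behind brain reading), the sketch below trains a shrinkage LDA classifier on synthetic epochs with a small P300-like deflection added to target trials. It is not the preprocessing or classification pipeline used in the cited study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic target/standard EEG epochs: Gaussian noise plus a small late
# positive deflection on target trials. Purely illustrative data.
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 50
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)        # 1 = target, 0 = standard
X[y == 1, :, 25:35] += 0.4                   # crude P300-like component

features = X.reshape(n_trials, -1)           # flatten channels x time into one vector
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, features, y, cv=5)
print(f"mean single-trial accuracy: {scores.mean():.2f}")
```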

  12. On the Applicability of Brain Reading for Predictive Human-Machine Interfaces in Robotics

    PubMed Central

    Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred

    2013-01-01

    The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires behavior similar to that performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors. PMID:24358125

  13. Interface design in the process industries

    NASA Technical Reports Server (NTRS)

    Beaverstock, M. C.; Stassen, H. G.; Williamson, R. A.

    1977-01-01

    Every operator runs his plant in accord with his own mental model of the process. In this sense, one characteristic of an ideal man-machine interface is that it be in harmony with that model. With this theme in mind, the paper first reviews the functions of the process operator and compares them with human operators involved in control situations previously studied outside the industrial environment (pilots, air traffic controllers, helmsmen, etc.). A brief history of the operator interface in the process industry and the traditional methodology employed in its design is then presented. Finally, a much more fundamental approach utilizing a model definition of the human operator's behavior is presented.

  14. Advanced Aircraft Interfaces: The Machine Side of the Man-Machine Interface (Les Interfaces sur les Avions de Pointe: L’Aspect Machine de l’Interface Homme-Machine)

    DTIC Science & Technology

    1992-10-01

    [Fragment of the proceedings front matter and table of contents recovered from the source record. Identifiable entries include "Advanced Cockpit - Mission and Image Management" by J. Struck; "Aircrew Acceptance of Automation in the Cockpit" by M. Hicks and I…; "A Systems Approach to the Advanced Aircraft Man-Machine Interface" by F. Armogida; and "Management of Avionics Data in the Cockpit".]

  15. Structure design of lower limb exoskeletons for gait training

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Ziqiang; Tao, Chunjing; Ji, Run

    2015-09-01

    Due to the close physical interaction between human and machine in process of gait training, lower limb exoskeletons should be safe, comfortable and able to smoothly transfer desired driving force/moments to the patients. Correlatively, in kinematics the exoskeletons are required to be compatible with human lower limbs and thereby to avoid the uncontrollable interactional loads at the human-machine interfaces. Such requirement makes the structure design of exoskeletons very difficult because the human-machine closed chains are complicated. In addition, both the axis misalignments and the kinematic character difference between the exoskeleton and human joints should be taken into account. By analyzing the DOF(degree of freedom) of the whole human-machine closed chain, the human-machine kinematic incompatibility of lower limb exoskeletons is studied. An effective method for the structure design of lower limb exoskeletons, which are kinematically compatible with human lower limb, is proposed. Applying this method, the structure synthesis of the lower limb exoskeletons containing only one-DOF revolute and prismatic joints is investigated; the feasible basic structures of exoskeletons are developed and classified into three different categories. With the consideration of quasi-anthropopathic feature, structural simplicity and wearable comfort of lower limb exoskeletons, a joint replacement and structure comparison based approach to select the ideal structures of lower limb exoskeletons is proposed, by which three optimal exoskeleton structures are obtained. This paper indicates that the human-machine closed chain formed by the exoskeleton and human lower limb should be an even-constrained kinematic system in order to avoid the uncontrollable human-machine interactional loads. The presented method for the structure design of lower limb exoskeletons is universal and simple, and hence can be applied to other kinds of wearable exoskeletons.
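The degree-of-freedom bookkeeping referred to above is commonly done with the Chebychev-Gruebler-Kutzbach mobility criterion, sketched below for an invented planar closed chain; the example is not a structure from the paper, and the paper's own analysis may differ.

```python
# Chebychev-Gruebler-Kutzbach mobility count, one standard way to check whether
# a human-machine closed chain is exactly constrained. The joint list below is a
# made-up planar leg/exoskeleton example.
def mobility(n_links, joint_freedoms, spatial=True):
    """Mobility M = d*(N - 1 - J) + sum(f_i), with d = 6 (spatial) or 3 (planar)."""
    d = 6 if spatial else 3
    j = len(joint_freedoms)
    return d * (n_links - 1 - j) + sum(joint_freedoms)

# Planar example: ground + thigh + shank + two exoskeleton links = 5 links joined
# into a single closed loop by five one-DOF revolute joints (a five-bar linkage).
print(mobility(n_links=5, joint_freedoms=[1, 1, 1, 1, 1], spatial=False))   # -> 2 DOF
```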

  16. Integration Telegram Bot on E-Complaint Applications in College

    NASA Astrophysics Data System (ADS)

    Rosid, M. A.; Rachmadany, A.; Multazam, M. T.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The Internet of Things (IoT) has influenced human life, with internet connectivity extending from human-to-human to human-to-machine and machine-to-machine communication. This research field creates technologies and concepts that allow humans to communicate with machines for specific purposes. This research aimed to integrate the Telegram messaging service with an e-complaint application at a college. With this integration, users do not need to visit the URL of the e-complaint application; instead, they can simply submit a complaint via Telegram, and the complaint is then forwarded to the e-complaint application. The test results show that the e-complaint integration with the Telegram bot runs in accordance with the design. The Telegram bot makes it convenient for academicians to submit complaints, and it offers interaction through the familiar interface people already use every day on their smartphones. Thus, with this system, the work unit concerned can immediately make improvements, since the whole complaint process is handled rapidly.
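A minimal relay of the kind described above can be built on the public Telegram Bot HTTP API (getUpdates and sendMessage). In the sketch below the bot token, the e-complaint endpoint URL, and the payload fields of the complaint service are placeholders and assumptions; the cited system's actual implementation is not shown here.

```python
import requests

# Poll Telegram for new messages and forward each one to an (assumed) e-complaint
# REST endpoint, then acknowledge the user. Token and endpoint are placeholders.
BOT_TOKEN = "123456:replace-with-real-token"
API = f"https://api.telegram.org/bot{BOT_TOKEN}"
COMPLAINT_ENDPOINT = "https://example.edu/e-complaint/api/complaints"  # hypothetical

def poll_once(offset=None):
    updates = requests.get(f"{API}/getUpdates", params={"offset": offset, "timeout": 30}).json()
    for update in updates.get("result", []):
        message = update.get("message", {})
        text, chat_id = message.get("text"), message.get("chat", {}).get("id")
        if not text or chat_id is None:
            continue
        # Forward the complaint text to the assumed e-complaint service.
        requests.post(COMPLAINT_ENDPOINT, json={"source": "telegram", "text": text})
        # Acknowledge the user inside Telegram.
        requests.post(f"{API}/sendMessage",
                      json={"chat_id": chat_id, "text": "Your complaint has been recorded."})
        offset = update["update_id"] + 1
    return offset

if __name__ == "__main__":
    poll_once()
```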

  17. Measuring human performance on NASA's microgravity aircraft

    NASA Technical Reports Server (NTRS)

    Morris, Randy B.; Whitmore, Mihriban

    1993-01-01

    Measuring human performance in a microgravity environment will aid in identifying the design requirements, human capabilities, safety, and productivity of future astronauts. The preliminary understanding of the microgravity effects on human performance can be achieved through evaluations conducted onboard NASA's KC-135 aircraft. These evaluations can be performed in relation to hardware performance, human-hardware interface, and hardware integration. Measuring human performance in the KC-135 simulated environment will contribute to the efforts of optimizing the human-machine interfaces for future and existing space vehicles. However, there are limitations, such as limited number of qualified subjects, unexpected hardware problems, and miscellaneous plane movements which must be taken into consideration. Examples for these evaluations, the results, and their implications are discussed in the paper.

  18. Research in image management and access

    NASA Technical Reports Server (NTRS)

    Vondran, Raymond F.; Barron, Billy J.

    1993-01-01

    Presently, the problem of over-all library system design has been compounded by the accretion of both function and structure to a basic framework of requirements. While more device power has led to increased functionality, opportunities for reducing system complexity at the user interface level have not always been pursued with equal zeal. The purpose of this book is therefore to set forth and examine these opportunities, within the general framework of human factors research in man-machine interfaces. Human factors may be viewed as a series of trade-off decisions among four polarized objectives: machine resources and user specifications; functionality and user requirements. In the past, a limiting factor was the availability of systems. However, in the last two years, over one hundred libraries supported by many different software configurations have been added to the Internet. This document includes a statistical analysis of human responses to five Internet library systems by key features, development of the ideal online catalog system, and ideal online catalog systems for libraries and information centers.

  19. Triboelectrification based motion sensor for human-machine interfacing.

    PubMed

    Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin

    2014-05-28

    We present triboelectrification-based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as the directions and magnitudes of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density of up to 10.71 mA/m², and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio of up to 1000, which enables the devices to serve as sensors that accurately record and track the motions of human joints, such as the elbow, knee, heel, and even fingers, and thus renders them a superior and unique invention in the field of HMI.
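The FFT step mentioned above amounts to locating the dominant frequency of a periodic sensor output and converting it to an angular rate. The sketch below does this on a synthetic waveform; the assumed mapping of one output cycle per 15 degrees of joint rotation is an illustrative placeholder, not a property of the reported device.

```python
import numpy as np

# Estimate the dominant frequency of a periodic sensor output and convert it to
# an angular velocity. Waveform and frequency-to-angle mapping are synthetic.
fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * 3.0 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]       # skip the DC bin

electrode_pitch_deg = 15.0                          # assumed joint rotation per output cycle
angular_velocity = dominant * electrode_pitch_deg   # deg/s under the assumed mapping
print(f"dominant frequency {dominant:.2f} Hz -> {angular_velocity:.1f} deg/s")
```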

  20. Operant conditioning of a multiple degree-of-freedom brain-machine interface in a primate model of amputation.

    PubMed

    Balasubramanian, Karthikeyan; Southerland, Joshua; Vaidya, Mukta; Qian, Kai; Eleryan, Ahmed; Fagg, Andrew H; Sluzky, Marc; Oweiss, Karim; Hatsopoulos, Nicholas

    2013-01-01

    Operant conditioning with biofeedback has been shown to be an effective method to modify neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body such as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which an amputated monkey is trained to control a multiple degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees-of-freedom. We present a number of behavioral metrics by which to assess improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputation is arguably the most clinically-relevant model of human amputation that could have direct implications for developing a neural prosthesis to treat humans with missing upper limbs.

  1. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  2. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  3. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  4. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  5. Overview Electrotactile Feedback for Enhancing Human Computer Interface

    NASA Astrophysics Data System (ADS)

    Pamungkas, Daniel S.; Caesarendra, Wahyu

    2018-04-01

    To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback has increasingly been utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user’s interactive experience. However, these force and/or vibration actuated haptic feedback systems can be bulky and uncomfortable to wear, and they are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, the sensory receptors within the skin that sense tactile stimuli and electric currents are described, and several factors that influence how the electric signal is transmitted to the brain via the skin are explained.

  6. Steering a Tractor by Means of an EMG-Based Human-Machine Interface

    PubMed Central

    Gomez-Gil, Jaime; San-Jose-Gonzalez, Israel; Nicolas-Alonso, Luis Fernando; Alonso-Garcia, Sergio

    2011-01-01

    An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates by means of the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from the Emotiv Company, was employed. This device, by means of 14 saline sensors, measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver’s scalp: eyes looking to the right and jaw opened, eyes looking to the right and jaw closed, eyes looking to the left and jaw opened, and eyes looking to the left and jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested these three guidance systems along three different trajectories: a straight line, a step, and a circumference. The accuracy of the EMG-based HMI guidance was lower than the accuracy obtained by manual guidance, which was lower in turn than the accuracy obtained by the autonomous GPS guidance; the computed standard deviations of error to the desired trajectory in the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviations of the manual guidance and the EMG-based HMI guidance differed by only 7 cm, and this difference is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor by an EMG-based HMI with almost the same accuracy as with manual steering. PMID:22164006

  7. Steering a tractor by means of an EMG-based human-machine interface.

    PubMed

    Gomez-Gil, Jaime; San-Jose-Gonzalez, Israel; Nicolas-Alonso, Luis Fernando; Alonso-Garcia, Sergio

    2011-01-01

    An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates by means of the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from the Emotiv Company, was employed. This device, by means of 14 saline sensors, measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver's scalp: eyes looking to the right and jaw opened, eyes looking to the right and jaw closed, eyes looking to the left and jaw opened, and eyes looking to the left and jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested these three guidance systems along three different trajectories: a straight line, a step, and a circumference. The accuracy of the EMG-based HMI guidance was lower than the accuracy obtained by manual guidance, which was lower in turn than the accuracy obtained by the autonomous GPS guidance; the computed standard deviations of error to the desired trajectory in the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviations of the manual guidance and the EMG-based HMI guidance differed by only 7 cm, and this difference is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor by an EMG-based HMI with almost the same accuracy as with manual steering.
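The accuracy figures quoted above are standard deviations of the lateral (cross-track) error relative to the desired trajectory. The sketch below computes that statistic for a synthetic straight-line pass in which the desired path is the x-axis; the logged positions are simulated, not data from the experiments.

```python
import numpy as np

# Standard deviation of cross-track error for a straight-line pass. The desired
# trajectory is the x-axis, so the cross-track error is simply the y-coordinate.
rng = np.random.default_rng(42)
x = np.linspace(0, 100, 500)               # metres travelled along the desired line
y = rng.normal(0.0, 0.09, size=x.size)     # synthetic lateral error with ~9 cm spread

cross_track_error = y                       # signed distance from the desired line
print(f"std of cross-track error: {cross_track_error.std():.3f} m")
```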

  8. Embedded System for Prosthetic Control Using Implanted Neuromuscular Interfaces Accessed Via an Osseointegrated Implant.

    PubMed

    Mastinu, Enzo; Doguet, Pascal; Botquin, Yohan; Hakansson, Bo; Ortiz-Catalan, Max

    2017-08-01

    Despite the technological progress in robotics achieved in the last decades, prosthetic limbs still lack functionality, reliability, and comfort. Recently, an implanted neuromusculoskeletal interface built upon osseointegration was developed and tested in humans, namely the Osseointegrated Human-Machine Gateway. Here, we present an embedded system to exploit the advantages of this technology. Our artificial limb controller allows for bioelectric signals acquisition, processing, decoding of motor intent, prosthetic control, and sensory feedback. It includes a neurostimulator to provide direct neural feedback based on sensory information. The system was validated using real-time tasks characterization, power consumption evaluation, and myoelectric pattern recognition performance. Functionality was proven in a first pilot patient from whom results of daily usage were obtained. The system was designed to be reliably used in activities of daily living, as well as a research platform to monitor prosthesis usage and training, machine-learning-based control algorithms, and neural stimulation paradigms.

  9. Study About Ceiling Design for Main Control Room of NPP with HFE

    NASA Astrophysics Data System (ADS)

    Gu, Pengfei; Ni, Ying; Chen, Weihua; Chen, Bo; Zhang, Jianbo; Liang, Huihui

    Since human factors engineering (HFE) began to be applied to the control room design of nuclear power plants (NPPs), the human-machine interface (HMI) has developed in an increasingly harmonious way, especially with the use of digital technology. Compared with the analog technology used for HMIs in the past, human-machine interaction has been considerably enhanced. HFE and main control room (MCR) design engineering for NPPs are multidisciplinary, drawing mainly on electrical and instrumentation control, reactor engineering, machinery, systems engineering, and management. However, the MCR not only contains the HMIs provided by the equipment; more importantly, it provides a work environment for the operator, of which the MCR ceiling is a part. Ceiling design, which is related to HFE and influences staff performance, should therefore also account for environmental and aesthetic factors, in particular by introducing professional design experience and evaluation methods. Based on implementation experience from the Ling Ao Phase II and Hong Yanhe projects, the study analyzes lighting effects, space partitioning, and visual load for the MCR ceiling of an NPP. In combination with the requirements of standards, the advantages and disadvantages of the MCR ceiling design are discussed, and a ceiling design solution is proposed that takes into account the requirements of light weight, noise reduction, fire prevention, and moisture protection.

  10. Human factors model concerning the man-machine interface of mining crewstations

    NASA Technical Reports Server (NTRS)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspect of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reach ability.

  11. Development and validation of methods for man-machine interface evaluation. [for shuttles and shuttle payloads

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Micocci, A.

    1975-01-01

    The alternate methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (task and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity to critical problems.

  12. Human Factors in Accidents Involving Remotely Piloted Aircraft

    NASA Technical Reports Server (NTRS)

    Merlin, Peter William

    2013-01-01

    This presentation examines human factors that contribute to RPA mishaps and provides analysis of lessons learned. RPA accident data from U.S. military and government agencies were reviewed and analyzed to identify human factors issues. Common contributors to RPA mishaps fell into several major categories: cognitive factors (pilot workload), physiological factors (fatigue and stress), environmental factors (situational awareness), staffing factors (training and crew coordination), and design factors (human machine interface).

  13. Embedded Control System for Smart Walking Assistance Device.

    PubMed

    Bosnak, Matevz; Skrjanc, Igor

    2017-03-01

    This paper presents the design and implementation of a unique control system for a smart hoist, a therapeutic device that is used in rehabilitation of walking. The control system features a unique human-machine interface that allows the human to intuitively control the system just by moving or rotating their body. The paper contains an overview of the complete system, including the design and implementation of custom sensors, DC servo motor controllers, communication interfaces, and an embedded-system-based central control system. The prototype of the complete system was tested in a six-run experiment on 11 subjects, and the results show that the proposed control interface is indeed intuitive and simple for the user to adopt.

  14. Investigation of human-robot interface performance in household environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  15. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing.

    PubMed

    Park, Ki-Woong; Lee, Younho; Baek, Sung Hoon

    2017-08-08

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber physical systems. The objects include mobile platforms such as smartphones, as well as Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on the objects themselves, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board, and present evaluation results from our testbed. From the evaluation results, we compare our system against alternative approaches that implement this functionality using only the cellular or the Wi-Fi interface (but not both), and show that in typical usage, T-Wing consumes less than 15× the energy and is faster by an order of magnitude.

  16. Matching brain-machine interface performance to space applications.

    PubMed

    Citi, Luca; Tonet, Oliver; Marinelli, Martina

    2009-01-01

    A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and a HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.

  17. Automation's Effect on Library Personnel.

    ERIC Educational Resources Information Center

    Dakshinamurti, Ganga

    1985-01-01

    Reports on survey studying the human-machine interface in Canadian university, public, and special libraries. Highlights include position category and educational background of 118 participants, participants' feelings toward automation, physical effects of automation, diffusion in decision making, interpersonal communication, future trends,…

  18. Human-centered automation and AI - Ideas, insights, and issues from the Intelligent Cockpit Aids research effort

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.; Schutte, Paul C.

    1989-01-01

    A development status evaluation is presented for the NASA-Langley Intelligent Cockpit Aids research program, which encompasses AI, human/machine interfaces, and conventional automation. Attention is being given to decision-aiding concepts for human-centered automation, with emphasis on inflight subsystem fault management, inflight mission replanning, and communications management. The cockpit envisioned is for advanced commercial transport aircraft.

  19. Humans and machines in space: The vision, the challenge, the payoff; AAS Goddard Memorial Symposium, 29th, Washington, DC, March 14-15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    A recent symposium produced papers in the areas of solar system exploration, man machine interfaces, cybernetics, virtual reality, telerobotics, life support systems and the scientific and technology spinoff from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.

  20. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
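One simplified way to think about interface "adequacy" is to project the machine model through the interface's state abstraction and check that no user event becomes ambiguous at the interface level. The toy check below, loosely inspired by the espresso-machine example mentioned in the abstract, uses invented models and is not the tool's actual verification algorithm.

```python
from collections import defaultdict

# Toy interface-adequacy check: project the machine through the interface's
# state abstraction and flag events whose interface-level outcome is ambiguous.
machine = {            # (machine state, event) -> next machine state
    ("m_off", "press"): "m_heating",
    ("m_heating", "press"): "m_off",
    ("m_heating", "done"): "m_ready",
    ("m_ready", "press"): "m_brewing",
    ("m_brewing", "done"): "m_ready",
}
abstraction = {"m_off": "OFF", "m_heating": "ON", "m_ready": "ON", "m_brewing": "BUSY"}

def interface_is_adequate(machine, abstraction):
    projected = defaultdict(set)
    for (state, event), nxt in machine.items():
        projected[(abstraction[state], event)].add(abstraction[nxt])
    ok = True
    for (iface_state, event), targets in projected.items():
        if len(targets) > 1:
            print(f"ambiguous: in '{iface_state}', '{event}' may lead to {sorted(targets)}")
            ok = False
    return ok

print("adequate" if interface_is_adequate(machine, abstraction) else "inadequate")
```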

  1. Generating a Reduced Gravity Environment on Earth

    NASA Technical Reports Server (NTRS)

    Dungan, Larry K.; Cunningham, Tom; Poncia, Dina

    2010-01-01

    Since the 1950s several reduced gravity simulators have been designed and utilized in preparing humans for spaceflight and in reduced gravity system development. The Active Response Gravity Offload System (ARGOS) is the newest and most realistic gravity offload simulator. ARGOS provides three degrees of motion within the test area and is scalable for full building deployment. The inertia of the overhead system is eliminated by an active motor and control system. This presentation will discuss what ARGOS is, how it functions, and the unique challenges of interfacing to the human. Test data and video for human and robotic systems will be presented. A major variable in the human machine interaction is the interface of ARGOS to the human. These challenges along with design solutions will be discussed.

  2. Interactome INSIDER: a structural interactome browser for genomic studies.

    PubMed

    Meyer, Michael J; Beltrán, Juan Felipe; Liang, Siqi; Fragoza, Robert; Rumack, Aaron; Liang, Jin; Wei, Xiaomu; Yu, Haiyuan

    2018-01-01

    We present Interactome INSIDER, a tool to link genomic variant information with structural protein-protein interactomes. Underlying this tool is the application of machine learning to predict protein interaction interfaces for 185,957 protein interactions with previously unresolved interfaces in human and seven model organisms, including the entire experimentally determined human binary interactome. Predicted interfaces exhibit functional properties similar to those of known interfaces, including enrichment for disease mutations and recurrent cancer mutations. Through 2,164 de novo mutagenesis experiments, we show that mutations of predicted and known interface residues disrupt interactions at a similar rate and much more frequently than mutations outside of predicted interfaces. To spur functional genomic studies, Interactome INSIDER (http://interactomeinsider.yulab.org) enables users to identify whether variants or disease mutations are enriched in known and predicted interaction interfaces at various resolutions. Users may explore known population variants, disease mutations, and somatic cancer mutations, or they may upload their own set of mutations for this purpose.

  3. A Cognitive Systems Engineering Approach to Developing Human Machine Interface Requirements for New Technologies

    NASA Astrophysics Data System (ADS)

    Fern, Lisa Carolynn

    This dissertation examines the challenges inherent in designing and regulating to support human-automation interaction for new technologies that will be deployed into complex systems. A key question for new technologies with increasingly capable automation, is how work will be accomplished by human and machine agents. This question has traditionally been framed as how functions should be allocated between humans and machines. Such framing misses the coordination and synchronization that is needed for the different human and machine roles in the system to accomplish their goals. Coordination and synchronization demands are driven by the underlying human-automation architecture of the new technology, which are typically not specified explicitly by designers. The human machine interface (HMI), which is intended to facilitate human-machine interaction and cooperation, typically is defined explicitly and therefore serves as a proxy for human-automation cooperation requirements with respect to technical standards for technologies. Unfortunately, mismatches between the HMI and the coordination and synchronization demands of the underlying human-automation architecture can lead to system breakdowns. A methodology is needed that both designers and regulators can utilize to evaluate the predicted performance of a new technology given potential human-automation architectures. Three experiments were conducted to inform the minimum HMI requirements for a detect and avoid (DAA) system for unmanned aircraft systems (UAS). The results of the experiments provided empirical input to specific minimum operational performance standards that UAS manufacturers will have to meet in order to operate UAS in the National Airspace System (NAS). These studies represent a success story for how to objectively and systematically evaluate prototype technologies as part of the process for developing regulatory requirements. They also provide an opportunity to reflect on the lessons learned in order to improve the methodology for defining technology requirements for regulators in the future. The biggest shortcoming of the presented research program was the absence of the explicit definition, generation and analysis of potential human-automation architectures. Failure to execute this step in the research process resulted in less efficient evaluation of the candidate prototypes technologies in addition to a lack of exploration of different approaches to human-automation cooperation. Defining potential human-automation architectures a priori also allows regulators to develop scenarios that will stress the performance boundaries of the technology during the evaluation phase. The importance of adding this step of generating and evaluating candidate human-automation architectures prior to formal empirical evaluation is discussed. This document concludes with a look at both the importance of, and the challenges facing, the inclusion of examining human-automation coordination issues as part of the safety assurance activities of new technologies.

  4. Man-machine interfaces in LACIE/ERIPS

    NASA Technical Reports Server (NTRS)

    Duprey, B. B. (Principal Investigator)

    1979-01-01

    One of the most important aspects of the interactive portion of the LACIE/ERIPS software system is the way in which the analysis and decision-making capabilities of a human being are integrated with the speed and accuracy of a computer to produce a powerful analysis system. The three major man-machine interfaces in the system are (1) the use of menus for communications between the software and the interactive user; (2) the checkpoint/restart facility to recreate in one job the internal environment achieved in an earlier one; and (3) the error recovery capability for handling errors that would otherwise cause job termination. This interactive system, which executes on an IBM 360/75 mainframe, was adapted for use in noninteractive (batch) mode. A case study is presented to show how the interfaces work in practice by defining some fields based on an image screen display, noting the field definitions, and obtaining a film product of the classification map.

  5. The Body-Machine Interface: A new perspective on an old theme

    PubMed Central

    Casadio, Maura; Ranganathan, Rajiv; Mussa-Ivaldi, Ferdinando A.

    2012-01-01

    Body-machine interfaces establish a way to interact with a variety of devices, allowing their users to extend the limits of their performance. Recent advances in this field, ranging from computer-interfaces to bionic limbs, have had important consequences for people with movement disorders. In this article, we provide an overview of the basic concepts underlying the body-machine interface with special emphasis on their use for rehabilitation and for operating assistive devices. We outline the steps involved in building such an interface and we highlight the critical role of body-machine interfaces in addressing theoretical issues in motor control as well as their utility in movement rehabilitation. PMID:23237465

  6. A Concept for Optimizing Behavioural Effectiveness & Efficiency

    NASA Astrophysics Data System (ADS)

    Barca, Jan Carlo; Rumantir, Grace; Li, Raymond

    Both humans and machines exhibit strengths and weaknesses that can be enhanced by merging the two entities. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments in a more effective way than any system known today. To demonstrate this concept, we have developed a simulation where a semi interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through the use of three different control systems that can be used to assist in optimising overall goal directed performance. The first two control systems use, an onscreen button interface and a touch sensor, respectively to facilitate human navigation of the spider. The third control system is an autonomous navigation system through the use of machine intelligence embedded in the spider. This system enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.

  7. Neurosurgery and the dawning age of Brain-Machine Interfaces

    PubMed Central

    Rowland, Nathan C.; Breshears, Jonathan; Chang, Edward F.

    2013-01-01

    Brain–machine interfaces (BMIs) are on the horizon for clinical neurosurgery. Electrocorticography-based platforms are less invasive than implanted microelectrodes; however, the latter are unmatched in their ability to achieve fine motor control of a robotic prosthesis capable of natural human behaviors. These technologies will be crucial to restoring neural function to a large population of patients with severe neurologic impairment, including those with spinal cord injury, stroke, limb amputation, and disabling neuromuscular disorders such as amyotrophic lateral sclerosis. On the opposite end of the spectrum are neural enhancement technologies for specialized applications such as combat. An ongoing ethical dialogue is imminent as we prepare for BMI platforms to enter the neurosurgical realm of clinical management. PMID:23653884

  8. Conductive fiber-based ultrasensitive textile pressure sensor for wearable electronics.

    PubMed

    Lee, Jaehong; Kwon, Hyukho; Seo, Jungmok; Shin, Sera; Koo, Ja Hoon; Pang, Changhyun; Son, Seungbae; Kim, Jae Hyung; Jang, Yong Hoon; Kim, Dae Eun; Lee, Taeyoon

    2015-04-17

    A flexible and sensitive textile-based pressure sensor is developed using highly conductive fibers coated with dielectric rubber materials. The pressure sensor exhibits superior sensitivity, very fast response time, and high stability, compared with previous textile-based pressure sensors. By using a weaving method, the pressure sensor can be applied to make smart gloves and clothes that can control machines wirelessly as human-machine interfaces. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Software and Human-Machine Interface Development for Environmental Controls Subsystem Support

    NASA Technical Reports Server (NTRS)

    Dobson, Matthew

    2018-01-01

    The Space Launch System (SLS) is the next premier launch vehicle for NASA. It is the next stage of manned space exploration from American soil, and will be the platform in which we push further beyond Earth orbit. In preparation of the SLS maiden voyage on Exploration Mission 1 (EM-1), the existing ground support architecture at Kennedy Space Center required significant overhaul and updating. A comprehensive upgrade of controls systems was necessary, including programmable logic controller software, as well as Launch Control Center (LCC) firing room and local launch pad displays for technician use. Environmental control acts as an integral component in these systems, being the foremost system for conditioning the pad and extremely sensitive launch vehicle until T-0. The Environmental Controls Subsystem (ECS) required testing and modification to meet the requirements of the designed system, as well as the human factors requirements of NASA software for Validation and Verification (V&V). This term saw significant strides in the progress and functionality of the human-machine interfaces used at the launch pad, and improved integration with the controller code.

  10. Gesture-controlled interfaces for self-service machines and other applications

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
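
    The "linear-in-parameters" formulation above lends itself to a compact illustration. The sketch below is a hypothetical reconstruction, not the patent's code: it models an oscillatory feature trajectory as x_ddot = a*x + b*x_dot, fits (a, b) by linear least squares, and assigns an observed motion to whichever stored parameter set ("predictor bin") predicts it with the smallest residual. The model order and all names are assumptions.

    ```python
    # Hypothetical sketch of a linear-in-parameters gesture model:
    # fit x_ddot = a*x + b*x_dot by least squares, then pick the stored
    # "predictor bin" whose dynamics best predict the observed motion.
    import numpy as np

    def fit_gesture_params(x, dt):
        """Least-squares fit of (a, b) in x_ddot = a*x + b*x_dot."""
        x = np.asarray(x, dtype=float)
        xd = np.gradient(x, dt)          # numerical first derivative
        xdd = np.gradient(xd, dt)        # numerical second derivative
        A = np.column_stack([x, xd])     # regressors are linear in the parameters
        params, *_ = np.linalg.lstsq(A, xdd, rcond=None)
        return params                    # array([a, b])

    def classify_gesture(x, dt, predictor_bins):
        """Return the bin name whose parameters yield the smallest residual."""
        x = np.asarray(x, dtype=float)
        xd = np.gradient(x, dt)
        xdd = np.gradient(xd, dt)
        errors = {}
        for name, (a, b) in predictor_bins.items():
            residual = xdd - (a * x + b * xd)
            errors[name] = float(np.mean(residual ** 2))
        return min(errors, key=errors.get), errors

    if __name__ == "__main__":
        dt = 0.01
        t = np.arange(0, 2, dt)
        slow_wave = np.sin(2 * np.pi * 1.0 * t)   # ~1 Hz oscillatory "gesture"
        fast_wave = np.sin(2 * np.pi * 3.0 * t)   # ~3 Hz oscillatory "gesture"
        bins = {"slow_circle": fit_gesture_params(slow_wave, dt),
                "fast_wave": fit_gesture_params(fast_wave, dt)}
        label, errs = classify_gesture(np.sin(2 * np.pi * 2.9 * t), dt, bins)
        print(label, errs)                        # expected: fast_wave
    ```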

  11. Massachusetts Institute of Technology Consortium Agreement

    DTIC Science & Technology

    1999-03-01

    This is the third progress report of the M.I.T. Home Automation and Healthcare Consortium-Phase Two. It covers the majority of the new findings, concepts...research projects of home automation and healthcare, ranging from human modeling, patient monitoring, and diagnosis to new sensors and actuators, physical...aids, human-machine interface and home automation infrastructure. This report contains several patentable concepts, algorithms, and designs.

  12. Terminal Ailments Need Not Be Fatal: A Speculative Assessment of the Impact of Online Public Access Catalogs in Academic Settings.

    ERIC Educational Resources Information Center

    Sandler, Mark

    1985-01-01

    Discusses several concerns about nature of online public access catalogs (OPAC) that have particular import to reference librarians: user passivity and loss of control growing out of "human-machine interface" and the larger social context; and the tendency of computerized bibliographic systems to obfuscate human origins of library…

  13. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    NASA Astrophysics Data System (ADS)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  14. An online brain-machine interface using decoding of movement direction from the human electrocorticogram

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten

    2012-08-01

    A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
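
    As a rough illustration of the decoding step described above, the sketch below classifies trial-wise ECoG features into one of two movement directions with a linear discriminant and estimates accuracy by cross-validation. The synthetic data and the choice of features are assumptions, not the authors' pipeline.

    ```python
    # Minimal sketch of two-class direction decoding from trial-wise features.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_channels = 100, 16

    # Synthetic "ECoG" features: the two directions differ slightly in a few channels.
    directions = rng.integers(0, 2, size=n_trials)        # 0 = left, 1 = right
    features = rng.normal(size=(n_trials, n_channels))
    features[directions == 1, :4] += 0.8                   # direction-dependent shift

    decoder = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(decoder, features, directions, cv=5)
    print(f"Mean decoding accuracy: {accuracy.mean():.2f}")  # well above chance (0.5)
    ```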

  15. Mental workload prediction based on attentional resource allocation and information processing.

    PubMed

    Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin

    2015-01-01

    Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.
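
    A deliberately simplified sketch of the kind of model the abstract points to is given below: each display element carries an information-processing demand, an attention-allocation weight scales how much of that demand the pilot incurs, and the workload index is the weighted sum. The weights and demand values are invented for illustration; the paper's actual model is more elaborate.

    ```python
    # Toy workload index: attention-weighted sum of per-element processing demands.
    def predicted_workload(elements):
        """elements: list of (attention_weight, processing_demand) pairs."""
        total_weight = sum(w for w, _ in elements) or 1.0
        return sum(w * d for w, d in elements) / total_weight

    # Example: the attitude indicator dominates attention during an
    # abnormal-attitude recovery and also carries a high processing demand.
    cockpit = [(0.5, 0.9),   # attitude indicator
               (0.2, 0.6),   # airspeed
               (0.2, 0.5),   # altitude
               (0.1, 0.3)]   # heading
    print(f"predicted workload index: {predicted_workload(cockpit):.2f}")
    ```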

  16. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.
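
    To make the throughput figures above concrete, the sketch below computes the Wolpaw information transfer rate, a common metric for discrete-selection interfaces. It is shown purely as an illustration and is not necessarily the measure the authors used for continuous 3D cursor control.

    ```python
    # Wolpaw information transfer rate for an N-target selection interface.
    import math

    def wolpaw_itr(n_targets: int, accuracy: float, selections_per_sec: float) -> float:
        """Information transfer rate in bits per second."""
        if accuracy >= 1.0:
            bits = math.log2(n_targets)
        else:
            bits = (math.log2(n_targets)
                    + accuracy * math.log2(accuracy)
                    + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
        return bits * selections_per_sec

    # Example (assumed numbers): 8 targets, 95% accuracy, 2 selections per second.
    print(f"{wolpaw_itr(8, 0.95, 2.0):.1f} bits/s")
    ```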

  17. A Machine Learning System for Analyzing Human Tactics in a Game

    NASA Astrophysics Data System (ADS)

    Ito, Hirotaka; Tanaka, Toshimitsu; Sugie, Noboru

    In order to realize advanced man-machine interfaces, it is desirable to develop a system that can infer the mental state of human users and then return appropriate responses. As a first step toward this goal, we developed a system capable of inferring human tactics in a simple game played between the system and a human. We present a machine learning system that plays a color expectation game. The system infers the tactics of the opponent and then decides its action based on the result. We employed a modified version of a classifier system like XCS to design the system. In addition, three methods are proposed to accelerate the learning rate: a masking method, an iterative method, and tactics templates. The results of computer experiments confirmed that the proposed methods effectively accelerate the machine learning. The masking method and the iterative method are effective for simple strategies that consider only part of the past information. However, the learning speed of these methods is not sufficient for tactics that refer to a large amount of past information. In that case, the tactics template was able to settle the learning rapidly once the tactics were identified.

  18. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing

    PubMed Central

    Baek, Sung Hoon

    2017-01-01

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber-physical systems. The objects include mobile platforms such as smartphones and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on the objects themselves, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board and present evaluation results from our testbed. We compare our system against alternative approaches that implement this functionality using only the cellular or Wi-Fi interface (but not both), and show that in typical usage T-Wing consumes roughly 15 times less energy and is faster by an order of magnitude. PMID:28786942

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.R. McJunkin; R.L. Boring; M.A. McQueen

    Situational awareness in the operation and supervision of an industrial system means that the decision-making entity, whether machine or human, has the important data presented in a timely manner. The goal is an optimal presentation of information such that the operator has the best opportunity to accurately interpret and react to anomalies due to system degradation, failures, or adversaries. Anticipated problems are a matter for system design; this paper, however, focuses on concepts for enhancing the situational awareness of a human operator when unanticipated or unaddressed event types occur. A methodology for human-machine interface development and a refinement strategy are described for a synthetic fuels plant model. A novel concept for adaptively highlighting the most interesting information in the system and a plan for testing the methodology are also described.

  20. [Mechatronic in functional endoscopic sinus surgery. First experiences with the daVinci Telemanipulatory System].

    PubMed

    Strauss, G; Winkler, D; Jacobs, S; Trantakis, C; Dietz, A; Bootz, F; Meixensberger, J; Falk, V

    2005-07-01

    This study examines the advantages and disadvantages of a commercial telemanipulator system (daVinci, Intuitive Surgical, USA) with computer-guided instruments in functional endoscopic sinus surgery (FESS). We performed five different surgical FESS steps on 14 anatomical preparations and compared them with conventional FESS. A total of 140 procedures were examined taking into account the following parameters: degrees of freedom (DOF), duration, learning curve, force feedback, and human-machine interface. Telemanipulator instruments have more DOF available than conventional instrumentation in FESS. The average time consumed by configuration of the telemanipulator is around 9+/-2 min. Missing force feedback is evaluated mainly as a disadvantage of the telemanipulator. Scaling was evaluated as helpful. The ergonomic concept seems to be better than the conventional solution. Computer-guided instruments showed better results for the available DOF of the instruments. The human-machine interface is more adaptable and variable than in conventional instrumentation. Motion scaling and indexing are characteristics of the telemanipulator concept that are helpful for FESS in our study.

  1. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands

    PubMed Central

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system’s complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs. PMID:26069961

  2. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands.

    PubMed

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system's complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs.
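
    The bidirectional selection loop described above can be sketched generically: the controller cycles through the available device functions, presents each one to the user (a print statement stands in for an electro-tactile stimulation pattern), and the user confirms the current option with a single binary command signal. Function names, timings, and the simulated user are illustrative assumptions.

    ```python
    # Sketch of a scanning menu driven by one binary command signal.
    import time
    from typing import Callable, Optional

    def scanning_menu(options, present: Callable[[str], None],
                      user_ack: Callable[[], bool],
                      dwell_s: float = 1.0, max_cycles: int = 3) -> Optional[str]:
        """Cycle through options until the user acknowledges one."""
        for _ in range(max_cycles):
            for option in options:
                present(option)                    # e.g., a unique stimulation pattern
                t0 = time.time()
                while time.time() - t0 < dwell_s:  # wait for a single command signal
                    if user_ack():
                        return option
                    time.sleep(0.01)
        return None                                # nothing selected

    if __name__ == "__main__":
        functions = ["open hand", "close hand", "pinch grip", "rotate wrist"]
        # Simulated user: acknowledges while the third option is being presented.
        state = {"count": 0}
        def fake_present(name):
            state["count"] += 1
            print(f"stimulating pattern for: {name}")
        def fake_ack():
            return state["count"] == 3
        print("selected:", scanning_menu(functions, fake_present, fake_ack, dwell_s=0.05))
    ```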

  3. Human factors with nonhumans - Factors that affect computer-task performance

    NASA Technical Reports Server (NTRS)

    Washburn, David A.

    1992-01-01

    There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.

  4. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  5. A study on the application of voice interaction in automotive human machine interface experience design

    NASA Astrophysics Data System (ADS)

    Huang, Zhaohui; Huang, Xiemin

    2018-04-01

    This paper first introduces the trend toward integrating multi-channel interactions in automotive HMI (Human Machine Interface), starting from the complex information models faced by existing automotive HMI, and describes various interaction modes. By comparing voice interaction with touch screens, gestures, and other interaction modes, the potential and feasibility of voice interaction in automotive HMI experience design are established. Then, the related theories of voice interaction, identification technologies, human cognitive models of voice, and voice design methods are further explored, and the research priority of this paper is proposed, i.e., how to design voice interaction to create more humane task-oriented dialogue scenarios that enhance the interactive experience of automotive HMI. The specific driving scenarios suitable for the use of voice interaction are studied and classified, and usability principles and key elements for automotive HMI voice design are proposed according to the scenario features. Then, through a user-participatory usability testing experiment, the dialogue processes of voice interaction in automotive HMI are defined. The logics and grammars of voice interaction are classified according to the experimental results, and the mental models in the interaction processes are analyzed. Finally, a voice interaction design method for creating humane task-oriented dialogue scenarios in the driving environment is proposed.

  6. Advanced system functions for the office information system

    NASA Astrophysics Data System (ADS)

    Ishikawa, Tetsuya

    The author first describes the functions needed for an information management system in the office. Next, he discusses the requisites for the enhancement of system functions, stating that such enhancement must be examined comprehensively from every point of view, including processing time and cost. In this paper, he concentrates on the enhancement of the man-machine interface (human interface), that is, how to make the system easy to use for office workers.

  7. Intelligent Systems and Advanced User Interfaces for Design, Operation, and Maintenance of Command Management Systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1998-01-01

    Historically, Command Management Systems (CMS) have been large, expensive, spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS or a set of core components for CMS systems. Current MOC (mission operations center) hardware and software include Unix workstations, the C/C++ and Java programming languages, and X and Java window interface representations. This configuration provides the power and flexibility to support sophisticated systems and intelligent user interfaces that exploit state-of-the-art technologies in human-machine systems engineering, decision making, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of the issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, design and analysis tools from a human-machine systems engineering point of view (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of a spacecraft-specific CMS as well as continuity for CMS design and development across spacecraft with varying needs. The savings in this case are in software reuse at all stages of the software engineering process.

  8. Human factors dimensions in the evolution of increasingly automated control rooms for near-earth satellites

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1982-01-01

    The NASA-Goddard Space Flight Center is responsible for the control and ground support of all of NASA's unmanned near-earth satellites. Traditionally, each satellite had its own dedicated mission operations room. In the mid-seventies, an integration of some of these dedicated facilities was begun with the primary objective of reducing costs. In this connection, the Multi-Satellite Operations Control Center (MSOCC) was designed. MSOCC is currently a labor-intensive operation. Recently, Goddard has become increasingly aware of human factors and human-machine interface issues. A summary is provided of some of the attempts to apply human factors considerations in the design of command and control environments. Current and future activities with respect to human factors and systems design are discussed, giving attention to the allocation of tasks between human and computer and the interface for the human-computer dialogue.

  9. Errare machinale est: the use of error-related potentials in brain-machine interfaces

    PubMed Central

    Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.

    2014-01-01

    The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937
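
    One of the strategies surveyed above, correcting the machine's actions when an error-related potential is detected, can be sketched schematically as follows. The single-trial ErrP classifier is replaced by a stub returning a probability, and the threshold and action set are assumptions.

    ```python
    # Schematic error-correction loop: undo the last action when an ErrP is detected.
    import random

    def errp_probability(eeg_epoch) -> float:
        """Stand-in for a single-trial ErrP classifier trained on labelled EEG epochs."""
        return random.random()          # placeholder score in [0, 1]

    def execute_with_errp_correction(actions, threshold=0.8):
        history = []
        for action in actions:
            history.append(action)                  # machine performs the action
            epoch = None                            # would be the post-action EEG window
            if errp_probability(epoch) > threshold: # user's brain "flags" an error
                undone = history.pop()              # correct by undoing the action
                print(f"ErrP detected -> undoing '{undone}'")
        return history

    if __name__ == "__main__":
        random.seed(1)
        print("final actions:", execute_with_errp_correction(["left", "left", "right", "up"]))
    ```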

  10. UAS Integration in the NAS Project: Part Task 6 V & V Simulation: Primary Results

    NASA Technical Reports Server (NTRS)

    Rorie, Conrad; Fern, Lisa; Shively, Jay; Santiago, Confesor

    2016-01-01

    This is a presentation of the preliminary results of the final V&V (Verification and Validation) activity on [RTCA (Radio Technical Commission for Aeronautics)] SC (Special Committee)-228 DAA (Detect and Avoid) HMI (Human-Machine Interface) requirements for display alerting and guidance.

  11. Advanced technologies for Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Dalton, John T.; Hughes, Peter M.

    1991-01-01

    Advanced technologies for Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: technology needs; current technology efforts at GSFC (human-machine interface development, object-oriented software development, expert systems, knowledge-based software engineering environments, and high-performance VLSI telemetry systems); and test beds.

  12. Determining Value in Higher Education: The Future of Instructional Technology in a Wal-Mart Economy.

    ERIC Educational Resources Information Center

    Tremblay, Wilfred

    1992-01-01

    Discusses value and the economy and examines the changing definition of educational value regarding higher education. Trends in instructional technology resulting from changes in expected educational value are described, including resource sharing, specialization, market expansion, privatization, easier human-machine interfaces, feedback systems,…

  13. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  14. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  15. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  16. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  17. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  18. Experiencing the Sights, Smells, Sounds, and Climate of Southern Italy in VR.

    PubMed

    Manghisi, Vito M; Fiorentino, Michele; Gattullo, Michele; Boccaccio, Antonio; Bevilacqua, Vitoantonio; Cascella, Giuseppe L; Dassisti, Michele; Uva, Antonio E

    2017-01-01

    This article explores what it takes to make interactive computer graphics and VR attractive as a promotional vehicle, from the points of view of tourism agencies and the tourists themselves. The authors exploited current VR and human-machine interface (HMI) technologies to develop an interactive, innovative, and attractive user experience called the Multisensory Apulia Touristic Experience (MATE). The MATE system implements a natural gesture-based interface and multisensory stimuli, including visuals, audio, smells, and climate effects.

  19. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  20. Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.

    2001-01-01

    The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of the human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and the helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.

  1. A Touch Sensing Technique Using the Effects of Extremely Low Frequency Fields on the Human Body

    PubMed Central

    Elfekey, Hatem; Bastawrous, Hany Ayad; Okamoto, Shogo

    2016-01-01

    Touch sensing is a fundamental approach in human-to-machine interfaces and is currently in widespread use. Many current applications use active touch sensing technologies. Passive touch sensing technologies are, however, better suited to implementing low-power or energy-harvesting touch sensing interfaces. This paper presents a passive touch sensing technique based on the fact that the human body is affected by the surrounding extremely low frequency (ELF) electromagnetic fields, such as those of AC power lines. These external ELF fields induce electric potentials on the human body—because human tissues exhibit some conductivity at these frequencies—resulting in what is called AC hum. We therefore propose a passive touch sensing system that detects this hum noise when a human touch occurs, thus distinguishing between touch and non-touch events. The effectiveness of the proposed technique is validated by designing and implementing a flexible touch sensing keyboard. PMID:27918416

  2. A Touch Sensing Technique Using the Effects of Extremely Low Frequency Fields on the Human Body.

    PubMed

    Elfekey, Hatem; Bastawrous, Hany Ayad; Okamoto, Shogo

    2016-12-02

    Touch sensing is a fundamental approach in human-to-machine interfaces and is currently in widespread use. Many current applications use active touch sensing technologies. Passive touch sensing technologies are, however, better suited to implementing low-power or energy-harvesting touch sensing interfaces. This paper presents a passive touch sensing technique based on the fact that the human body is affected by the surrounding extremely low frequency (ELF) electromagnetic fields, such as those of AC power lines. These external ELF fields induce electric potentials on the human body (because human tissues exhibit some conductivity at these frequencies), resulting in what is called AC hum. We therefore propose a passive touch sensing system that detects this hum noise when a human touch occurs, thus distinguishing between touch and non-touch events. The effectiveness of the proposed technique is validated by designing and implementing a flexible touch sensing keyboard.
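
    A rough sketch of the passive sensing idea above: the electrode signal is examined for power in a narrow band around the mains frequency (the 50/60 Hz AC hum), and a touch is declared when that power exceeds a calibrated threshold. Synthetic signals replace real electrode data, and the 50 Hz mains frequency and the threshold value are assumptions.

    ```python
    # Touch detection via mains-hum power on a sensing electrode (toy example).
    import numpy as np

    def hum_power(signal, fs, mains_hz=50.0):
        """Single-bin DFT magnitude at the mains frequency."""
        n = len(signal)
        t = np.arange(n) / fs
        i = np.dot(signal, np.cos(2 * np.pi * mains_hz * t))
        q = np.dot(signal, np.sin(2 * np.pi * mains_hz * t))
        return np.hypot(i, q) / n

    def is_touched(signal, fs, threshold=0.05):
        return hum_power(signal, fs) > threshold

    if __name__ == "__main__":
        fs = 1000.0
        t = np.arange(0, 0.2, 1 / fs)
        noise = 0.01 * np.random.default_rng(0).normal(size=t.size)
        no_touch = noise                                  # electrode floating
        touch = noise + 0.3 * np.sin(2 * np.pi * 50 * t)  # body couples in mains hum
        print("no touch:", is_touched(no_touch, fs))      # expected False
        print("touch:   ", is_touched(touch, fs))         # expected True
    ```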

  3. Optical HMI with biomechanical energy harvesters integrated in textile supports

    NASA Astrophysics Data System (ADS)

    De Pasquale, G.; Kim, SG; De Pasquale, D.

    2015-12-01

    This paper reports the design, prototyping, and experimental validation of a human-machine interface (HMI), named GoldFinger, integrated into a glove with energy harvesting from finger motion. The device is aimed at medical applications, design tools, the virtual reality field, and industrial applications where interaction with machines is restricted by safety procedures. The HMI prototype includes four piezoelectric transducers applied to the backside of the fingers at the PIP (proximal inter-phalangeal) joints, electric wires embedded in the fabric connecting the transducers, an aluminum case for the electronics, a wearable switch made of conductive fabric to turn the communication channel on and off, and a LED. The electronic circuit used to manage the power and to control the light emitter includes a diode bridge, leveling capacitors, a storage battery, and the conductive-fabric switch. Communication with the machine is managed by dedicated software, which includes the user interface, the optical tracking, and the continuous updating of the machine microcontroller. The energetic benefit of the energy harvester on battery lifetime is inversely proportional to the activation time of the optical emitter. In most applications, the optical port is active for 1 to 5% of the time, corresponding to a battery lifetime increase of between about 14% and 70%.
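
    A back-of-the-envelope energy budget consistent with the inverse relation quoted above is sketched below: if the harvester recovers an average power P_h while the optical emitter draws P_emit only during a duty cycle d, the relative lifetime gain is roughly P_h / (d * P_emit). The power values are illustrative assumptions, chosen only to reproduce the quoted 14%-70% range.

    ```python
    # Approximate battery-lifetime gain from harvesting, as a function of duty cycle.
    def lifetime_gain(p_harvest_mw: float, p_emit_mw: float, duty_cycle: float) -> float:
        """Relative lifetime increase ~ harvested power / average emitter power."""
        return p_harvest_mw / (duty_cycle * p_emit_mw)

    P_EMIT = 100.0      # mW drawn while the optical emitter is active (assumed)
    P_HARVEST = 0.7     # mW average recovered from finger motion (assumed)

    for duty in (0.05, 0.01):          # emitter active 5% or 1% of the time
        print(f"duty {duty:.0%}: lifetime +{lifetime_gain(P_HARVEST, P_EMIT, duty):.0%}")
    # -> roughly +14% and +70%, matching the range reported above.
    ```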

  4. The reported incidence of man-machine interface issues in Army aviators using the Aviator's Night Vision System (ANVIS) in a combat theatre

    NASA Astrophysics Data System (ADS)

    Hiatt, Keith L.; Rash, Clarence E.

    2011-06-01

    Background: Army Aviators rely on the ANVIS for night operations. Human factors literature notes that the ANVIS man-machine interface results in reports of visual and spinal complaints. This is the first study that has looked at these issues in the much harsher combat environment. Last year, the authors reported on the statistically significant (p<0.01) increased complaints of visual discomfort, degraded visual cues, and incidence of static and dynamic visual illusions in the combat environment [Proc. SPIE, Vol. 7688, 76880G (2010)]. In this paper we present the findings regarding increased spinal complaints and other man-machine interface issues found in the combat environment. Methods: A survey was administered to Aircrew deployed in support of Operation Enduring Freedom (OEF). Results: 82 Aircrew (representing an aggregate of >89,000 flight hours of which >22,000 were with ANVIS) participated. Analysis demonstrated high complaints of almost all levels of back and neck pain. Additionally, the use of body armor and other Aviation Life Support Equipment (ALSE) caused significant ergonomic complaints when used with ANVIS. Conclusions: ANVIS use in a combat environment resulted in higher and different types of reports of spinal symptoms and other man-machine interface issues over what was previously reported. Data from this study may be more operationally relevant than that of the peacetime literature as it is derived from actual combat and not from training flights, and it may have important implications about making combat predictions based on performance in training scenarios. Notably, Aircrew remarked that they could not execute the mission without ANVIS and ALSE and accepted the degraded ergonomic environment.

  5. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces.

    PubMed

    Liu, Yuhao; Norton, James J S; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R; Liu, Hank; Yan, Lingqing; Tran, Phat L; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A

    2016-11-01

    Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics.

  6. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces

    PubMed Central

    Liu, Yuhao; Norton, James J. S.; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R.; Liu, Hank; Yan, Lingqing; Tran, Phat L.; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A.; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J.; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A.

    2016-01-01

    Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics. PMID:28138529

  7. Rapid Prototyping and the Human Factors Engineering Process

    DTIC Science & Technology

    2016-08-29

    without the effort and cost associated with conventional man-in-the-loop simulation. Advocates suggest that rapid prototyping is compatible with...use should be made of man-in-the-loop simulation to supplement those analyses, but that such simulation is expensive and time consuming, precluding...conventional man-in-the-loop simulation. Rapid prototyping involves the construction and use of an executable model of a human-machine interface

  8. Sharing control between humans and automation using haptic interface: primary and secondary task performance benefits.

    PubMed

    Griffiths, Paul G; Gillespie, R Brent

    2005-01-01

    This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces.
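
    A toy illustration of the torque-sharing idea above is sketched below: the automation applies an assist torque proportional to the lane-keeping error, the human adds a torque on the same wheel, and the sum drives a crude one-dimensional vehicle model. The gains and the plant are illustrative assumptions, not the fixed-base simulator used in the study.

    ```python
    # Shared steering control: assist torque plus human torque on one wheel.
    import numpy as np

    def simulate(human_torque_fn, k_assist=2.0, b_assist=1.5, dt=0.01, t_end=10.0):
        y, y_dot = 1.0, 0.0                 # start 1 m off the lane centre
        trace = []
        for step in range(int(t_end / dt)):
            t = step * dt
            assist = -k_assist * y - b_assist * y_dot      # automation's copilot torque
            total = assist + human_torque_fn(t, y, y_dot)  # haptically shared command
            y_dot += total * dt                            # crude double-integrator plant
            y += y_dot * dt
            trace.append(y)
        return np.array(trace)

    passive_driver = lambda t, y, y_dot: 0.0               # driver just monitors by feel
    opposing_driver = lambda t, y, y_dot: 3.0 * y          # driver overrides the assist

    print("final error, passive driver :", round(simulate(passive_driver)[-1], 3))
    print("final error, opposing driver:", round(simulate(opposing_driver)[-1], 3))
    # With a passive driver the assist centres the vehicle; an opposing driver
    # can still override the automation and pull away from the lane.
    ```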

  9. A Human Factors Perspective on Alarm System Research and Development 2000 to 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curt Braun; John Grimes; Eric Shaver

    By definition, alarms serve to notify human operators of out-of-parameter conditions that could threaten equipment, the environment, product quality and, of course, human life. Given the complexities of industrial systems, human-machine interfaces, and the human operator, the understanding of how alarms and humans can best work together to prevent disaster is continually developing. This review examines advances in alarm research and development from 2000 to 2010 and includes the writings of trade professionals, engineering and human factors researchers, and standards organizations, with the goal of documenting advances in alarm system design, research, and implementation.

  10. Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control.

    PubMed

    Miller, Christopher A; Parasuraman, Raja

    2007-02-01

    To develop a method enabling human-like, flexible supervisory control via delegation to automation. Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. We review problems with static and adaptive (as opposed to "adaptable") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a "level of automation" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described. On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's "playbook." Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.

  11. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. OTM Machine Acceptance: In the Arab Culture

    NASA Astrophysics Data System (ADS)

    Rashed, Abdullah; Santos, Henrique

    Basically, neglecting the human factor is one of the main reasons for system failures or for technology rejection, even when important technologies are considered. Biometrics mostly have the characteristics needed for effortless acceptance, such as ease of use and usefulness, which are essential pillars of acceptance models such as TAM (technology acceptance model). However, this should be investigated. Many studies have been carried out to research the issues of technology acceptance in different cultures, especially western culture; Arabic culture lacks these types of studies, with few publications in this field. This paper introduces a new biometric interface for ATM machines. This interface depends on a promising biometric: odour. To assess the acceptance of this biometric, we distributed a questionnaire via a web site, called for participation in the Arab area, and found that most respondents would accept the use of odour.

  13. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  14. Personalized keystroke dynamics for self-powered human-machine interfacing.

    PubMed

    Chen, Jun; Zhu, Guang; Yang, Jin; Jing, Qingshen; Bai, Peng; Yang, Weiqing; Qi, Xuewei; Su, Yuanjie; Wang, Zhong Lin

    2015-01-27

    The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human-machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without applying an external power. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.
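
    The behavioural-biometric side of the keystroke dynamics described above can be illustrated with a small, hypothetical sketch: hold (dwell) times and inter-key flight times are summarised per user, and a new typing sample is attributed to the user with the nearest feature centroid. All data here are synthetic.

    ```python
    # Keystroke-dynamics identification from dwell and flight-time statistics.
    import numpy as np

    def timing_features(events):
        """events: list of (key, press_time_s, release_time_s) in typing order."""
        dwell = [r - p for _, p, r in events]
        flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
        return np.array([np.mean(dwell), np.std(dwell), np.mean(flight), np.std(flight)])

    def enroll(samples_per_user):
        """samples_per_user: {user: [event_list, ...]} -> per-user feature centroid."""
        return {u: np.mean([timing_features(s) for s in samples], axis=0)
                for u, samples in samples_per_user.items()}

    def identify(events, centroids):
        f = timing_features(events)
        return min(centroids, key=lambda u: np.linalg.norm(f - centroids[u]))

    def synth_sample(rng, dwell_mean, flight_mean, n=20):
        t, events = 0.0, []
        for i in range(n):
            dwell = max(0.02, rng.normal(dwell_mean, 0.01))
            events.append((chr(97 + i % 26), t, t + dwell))
            t += dwell + max(0.02, rng.normal(flight_mean, 0.02))
        return events

    rng = np.random.default_rng(0)
    users = {"alice": [synth_sample(rng, 0.09, 0.12) for _ in range(5)],
             "bob":   [synth_sample(rng, 0.14, 0.22) for _ in range(5)]}
    centroids = enroll(users)
    print(identify(synth_sample(rng, 0.13, 0.21), centroids))   # expected: bob
    ```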

  15. CESAR research in intelligent machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.

    1986-01-01

    The Center for Engineering Systems Advanced Research (CESAR) was established in 1983 as a national center for multidisciplinary, long-range research and development in machine intelligence and advanced control theory for energy-related applications. Intelligent machines of interest here are artificially created operational systems that are capable of autonomous decision making and action. The initial emphasis for research is remote operations, with specific application to dexterous manipulation in unstructured dangerous environments where explosives, toxic chemicals, or radioactivity may be present, or in other environments with significant risk such as coal mining or oceanographic missions. Potential benefits include reduced risk to man in hazardous situations, machine replication of scarce expertise, minimization of human error due to fear or fatigue, and enhanced capability using high resolution sensors and powerful computers. A CESAR goal is to explore the interface between the advanced teleoperation capability of today, and the autonomous machines of the future.

  16. Toward FRP-Based Brain-Machine Interfaces—Single-Trial Classification of Fixation-Related Potentials

    PubMed Central

    Finke, Andrea; Essig, Kai; Marchioro, Giuseppe; Ritter, Helge

    2016-01-01

    The co-registration of eye tracking and electroencephalography provides a holistic measure of ongoing cognitive processes. Recently, fixation-related potentials have been introduced to quantify the neural activity in such bi-modal recordings. Fixation-related potentials are time-locked to fixation onsets, just like event-related potentials are locked to stimulus onsets. Compared to existing electroencephalography-based brain-machine interfaces that depend on visual stimuli, fixation-related potentials have the advantages that they can be used in free, unconstrained viewing conditions and can also be classified on a single-trial level. Thus, fixation-related potentials have the potential to allow for conceptually different brain-machine interfaces that directly interpret cortical activity related to the visual processing of specific objects. However, existing research has investigated fixation-related potentials only with very restricted and highly unnatural stimuli in simple search tasks while participants’ body movements were restricted. We present a study where we relieved many of these restrictions while retaining some control by using a gaze-contingent visual search task. In our study, participants had to find a target object out of 12 complex and everyday objects presented on a screen while the electrical activity of the brain and eye movements were recorded simultaneously. Our results show that our proposed method for the classification of fixation-related potentials can clearly discriminate between fixations on relevant, non-relevant and background areas. Furthermore, we show that our classification approach generalizes not only to different test sets from the same participant, but also across participants. These results promise to open novel avenues for exploiting fixation-related potentials in electroencephalography-based brain-machine interfaces and thus providing a novel means for intuitive human-machine interaction. PMID:26812487
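    As a rough illustration of single-trial classification of fixation-related potentials (not the authors' pipeline; the data here are synthetic and the window lengths are assumptions), EEG can be epoched around fixation onsets and fed to a shrinkage LDA classifier:

    ```python
    # Illustrative only: synthetic EEG, so cross-validated accuracy is near chance.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    fs = 250                                     # sampling rate (Hz), assumed
    eeg = rng.standard_normal((8, 60 * fs))      # 8 channels, 60 s of synthetic EEG
    fix_onsets = rng.integers(fs, 59 * fs, 80)   # fixation onset samples from an eye tracker
    labels = rng.integers(0, 2, 80)              # 1 = fixation on target, 0 = elsewhere

    def epoch(sig, onsets, pre=0.1, post=0.4):
        """Cut fixation-locked windows (-100 ms to +400 ms) and flatten them."""
        a, b = int(pre * fs), int(post * fs)
        return np.stack([sig[:, o - a:o + b].ravel() for o in onsets])

    X = epoch(eeg, fix_onsets)
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    ```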

  17. Soft brain-machine interfaces for assistive robotics: A novel control approach.

    PubMed

    Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash

    2017-07-01

    Robotic systems offer the possibility of improving the life quality of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for an intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real-time a unidimensional index from user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence on the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
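    A minimal sketch of the stiffness-modulation step, assuming a decoder already outputs the probability of an active mental state (the stiffness bounds and smoothing constant below are invented, not the paper's values):

    ```python
    # Map an EEG-derived probability of an "active" state to a Cartesian stiffness command.
    K_MIN, K_MAX = 100.0, 1000.0   # hypothetical stiffness bounds (N/m)

    def stiffness_from_probability(p_active, previous=None, alpha=0.2):
        """Linearly interpolate stiffness and low-pass filter it to avoid abrupt jumps."""
        p = min(max(p_active, 0.0), 1.0)
        target = K_MIN + p * (K_MAX - K_MIN)
        return target if previous is None else (1 - alpha) * previous + alpha * target

    k = None
    for p in [0.1, 0.4, 0.9]:       # stream of decoder outputs
        k = stiffness_from_probability(p, previous=k)
        print(f"p_active={p:.1f} -> stiffness {k:.0f} N/m")
    ```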

  18. Intelligent man/machine interfaces on the space station

    NASA Technical Reports Server (NTRS)

    Daughtrey, Rodney S.

    1987-01-01

    Some important topics in the development of good, intelligent, usable man/machine interfaces for the Space Station are discussed. These computer interfaces should adhere strictly to three concepts or doctrines: generality, simplicity, and elegance. The motivation for natural language interfaces and their use and value on the Space Station, both now and in the future, are discussed.

  19. People, planners and policy: is there an interface?

    Treesearch

    Susan Kopka

    1979-01-01

    This research attempts to isolate some of the dimensions of human evaluations/perceptions of the built environment through the use of an Audience Response Machine and a video tape of environmental scenes. The results suggest that there are commonalities in peoples' evaluations/perceptions and that this type of inquiry has prescriptive value for design/planning....

  20. Some Ideas on the Microcomputer and the Information/Knowledge Workstation.

    ERIC Educational Resources Information Center

    Boon, J. A.; Pienaar, H.

    1989-01-01

    Identifies the optimal goal of knowledge workstations as the harmony of technology and human decision-making behaviors. Two types of decision-making processes are described and the application of each type to experimental and/or operational situations is discussed. Suggestions for technical solutions to machine-user interfaces are then offered.…

  1. Human machine interface to manually drive rhombic like vehicles in remote handling operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, Pedro; Vale, Alberto; Ventura, Rodrigo

    2015-07-01

    In the thermonuclear experimental reactor ITER, a vehicle named CTS is designed to transport a container with activated components inside the buildings. In nominal operations, the CTS is autonomously guided under supervision. However, in some unexpected situations, such as in rescue and recovery operations, the autonomous mode must be overridden and the CTS must be remotely guided by an operator. The CTS is a rhombic-like vehicle, with two drivable and steerable wheels along its longitudinal axis, providing omni-directional capabilities. The rhombic kinematics correspond to four control variables, which are difficult to manage in manual mode operation. This paper proposes a Human Machine Interface (HMI) to remotely guide the vehicle in manual mode. The proposed solution is implemented using a HMI with an encoder connected to a micro-controller and an analog 2-axis joystick. Experimental results were obtained comparing the proposed solution with other controller devices in different scenarios and using a software platform that simulates the kinematics and dynamics of the vehicle. (authors)
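    To illustrate why manual control of a rhombic platform involves four coupled variables, here is a hedged kinematic sketch (not the paper's HMI; the wheel spacing and joystick scaling are assumptions) mapping a 2-axis joystick command to the speed and steering angle of the front and rear wheels:

    ```python
    # Rhombic-like vehicle: two steerable wheels at +/- L/2 along the longitudinal axis.
    import math

    L = 2.0  # hypothetical wheel-to-wheel distance (m)

    def rhombic_commands(v: float, omega: float) -> dict:
        """Map body velocity (forward speed v, yaw rate omega) to per-wheel (speed, steer)."""
        cmds = {}
        for name, x in (("front", +L / 2), ("rear", -L / 2)):
            vx = v           # longitudinal velocity component at the wheel
            vy = omega * x   # lateral component induced by the body rotation
            cmds[name] = (math.hypot(vx, vy), math.atan2(vy, vx))
        return cmds

    joy_x, joy_y = 0.3, 0.8   # normalized joystick deflections
    print(rhombic_commands(v=1.5 * joy_y, omega=0.8 * joy_x))
    ```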

  2. Task-Oriented, Naturally Elicited Speech (TONE) Database for the Force Requirements Expert System, Hawaii (FRESH)

    DTIC Science & Technology

    1988-09-01

    Keywords: command and control; computational linguistics; expert system; voice recognition; man-machine interface; U.S. Government. This study simulates the characteristics of FRESH on a smaller scale and assisted NOSC in developing a voice-recognition, man-machine interface that could be used with TONE and upgraded at a later date.

  3. Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.

    2010-01-01

    From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.

  4. Data Publication and Interoperability for Long Tail Researchers via the Open Data Repository's (ODR) Data Publisher.

    NASA Astrophysics Data System (ADS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.

    2016-12-01

    Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data has been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g. R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED)[1] which provides an interface to databases connected through a common metadata core. In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data to meet funding mandates and academic requirements as well as fostering increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.
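    As a hedged illustration of machine-based consumption of such a repository, a client could query the API over HTTP; the endpoint, parameters, and response fields below are placeholders, not the actual ODR/AHED interface:

    ```python
    # Hypothetical REST query against a Data Publisher-style API.
    import requests

    BASE = "https://example.org/odr/api"   # placeholder base URL
    resp = requests.get(f"{BASE}/search",
                        params={"metadata_field": "mineral_name", "value": "olivine"},
                        timeout=30)
    resp.raise_for_status()
    for record in resp.json().get("records", []):
        print(record.get("database"), "-", record.get("title"))
    ```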

  5. Humans, Intelligent Technology, and Their Interface: A Study of Brown’s Point

    DTIC Science & Technology

    2017-12-01

    known about the role of drivers. When combining humans and intelligent technology (machines), such as self-driving vehicles, how people think about...disrupt the entire transportation industry and potentially change how society moves people and goods. The findings of the investigation are likely...The power of suggestion is very important to understand and consider when framing and bringing meaning to new technology, which points to looking at

  6. Hybrid soft computing systems for electromyographic signals analysis: a review.

    PubMed

    Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates

    2014-02-03

    Electromyography (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural network, support vector machine, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.
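    One of the building blocks the review names, a support vector machine on windowed time-domain EMG features, can be sketched as follows (synthetic data only; this is not code from the paper):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    def emg_window(active):                 # 200-sample window of synthetic surface EMG
        return (1.0 if active else 0.3) * rng.standard_normal(200)

    def features(w):                        # classic time-domain features
        return [np.sqrt(np.mean(w ** 2)),               # root mean square
                np.mean(np.abs(w)),                     # mean absolute value
                np.sum(np.diff(np.sign(w)) != 0)]       # zero crossings

    X = np.array([features(emg_window(a)) for a in [True] * 50 + [False] * 50])
    y = np.array([1] * 50 + [0] * 50)                   # 1 = muscle active, 0 = rest
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```

    A hybrid system in the review's sense would then combine such a classifier with, for example, fuzzy post-processing or evolutionary hyperparameter search.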

  7. Hybrid soft computing systems for electromyographic signals analysis: a review

    PubMed Central

    2014-01-01

    Electromyography (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural network, support vector machine, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis. PMID:24490979

  8. Human Machine Interface Programming and Testing

    NASA Technical Reports Server (NTRS)

    Foster, Thomas Garrison

    2013-01-01

    Human Machine Interface (HMI) Programming and Testing is about creating graphical displays to mimic mission critical ground control systems in order to provide NASA engineers with the ability to monitor the health management of these systems in real time. The Health Management System (HMS) is an online interactive human machine interface system that monitors all Kennedy Ground Control Subsystem (KGCS) hardware in the field. The Health Management System is essential to NASA engineers because it allows remote control and monitoring of the health management systems of all the Programmable Logic Controllers (PLC) and associated field devices. KGCS will have equipment installed at the launch pad, Vehicle Assembly Building, Mobile Launcher, as well as the Multi-Purpose Processing Facility. I am designing graphical displays to monitor and control new modules that will be integrated into the HMS. The design of the display screen will closely mimic the appearance and functionality of the actual modules. There are many different field devices used to monitor health management and each device has its own unique set of health management related data, therefore each display must also have its own unique way to display this data. Once the displays are created, the RSLogix5000 application is used to write software that maps all the required data read from the hardware to the graphical display. Once this data is mapped to its corresponding display item, the graphical display and hardware device will be connected through the same network in order to test all possible scenarios and types of data the graphical display was designed to receive. Test Procedures will be written to thoroughly test out the displays and ensure that they are working correctly before being deployed to the field. Additionally, the Kennedy Ground Controls Subsystem's user manual will be updated to explain to the NASA engineers how to use the new module displays.

  9. Being human in a global age of technology.

    PubMed

    Whelton, Beverly J B

    2016-01-01

    This philosophical enquiry considers the impact of a global world view and technology on the meaning of being human. The global vision increases our awareness of the common bond between all humans, while technology tends to separate us from an understanding of ourselves as human persons. We review some advances in connecting as community within our world, and many examples of technological changes. This review is not exhaustive. The focus is to understand enough changes to think through the possibility of healthcare professionals becoming cyborgs, human-machine units that are subsequently neither human nor machine. It is seen that human-technology interfaces are a different way of interacting but do not change what it is to be human in our rational capacities of providing meaningful speech and freely chosen actions. In the highly technical environment of the ICU, expert nurses work in harmony with both the technical equipment and the patient. We used Heidegger to consider the nature of equipment, and Descartes to explore unique human capacities. Aristotle, Wallace, Sokolowski, and Clarke provide a summary of humanity as substantial and relational. © 2015 John Wiley & Sons Ltd.

  10. Nanoscale wear and machining behavior of nanolayer interfaces.

    PubMed

    Nie, Xueyuan; Zhang, Peng; Weiner, Anita M; Cheng, Yang-Tse

    2005-10-01

    An atomic force microscope was used to incise a nanomultilayer with subnanometer precision, consequently exposing individual nanolayers and interfaces on which sliding and scanning nanowear/machining were performed. The letter reports the first observation on the nanoscale where (i) atomic debris forms in a collective manner, most likely by deformation and rupture of atomic bonds, and (ii) the nanolayer interfaces possess a much higher wear resistance (desired for nanomachines) or lower machinability (not desired for nanomachining) than the layers.

  11. Closeout of CRADA JSA 2012S004: Chapter 5, Integrated Control System, of the document of the ESS Conceptual Design Report, publicly available at https://europeanspallationsource.se/accelerator-documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Satogata, Todd

    2013-04-22

    The integrated control system (ICS) is responsible for the whole ESS machine and facility: accelerator, target, neutron scattering instruments and conventional facilities. This unified approach keeps the costs of development, maintenance and support relatively low. ESS has selected a standardised, field-proven controls framework, the Experimental Physics and Industrial Control System (EPICS), which was originally developed jointly by Argonne and Los Alamos National Laboratories. Complementing this selection are best practices and experience from similar facilities regarding platform standardisation, control system development and device integration and commissioning. The components of ICS include the control system core, the control boxes, the BLED database management system, and the human machine interface. The control system core is a set of systems and tools that make it possible for the control system to provide required data, information and services to engineers, operators, physicists and the facility itself. The core components are the timing system that makes possible clock synchronisation across the facility, the machine protection system (MPS) and the personnel protection system (PPS) that prevent damage to the machine and personnel, and a set of control system services. Control boxes are servers that control a collection of equipment (for example a radio frequency cavity). The integrated control system will include many control boxes that can be assigned to one supplier, such as an internal team, a collaborating institute or a commercial vendor. This approach facilitates a clear division of responsibilities and makes integration much easier. A control box is composed of a standardised hardware platform, components, development tools and services. On the top level, it interfaces with the core control system components (timing, MPS, PPS) and with the human-machine interface. At the bottom, it interfaces with the equipment and parts of the facility through a set of analog and digital signals, real-time control loops and other communication buses. The ICS central data management system is named BLED (beam line element databases). BLED is a set of databases, tools and services that is used to store, manage and access data. It holds vital control system configuration and physics-related (lattice) information about the accelerator, target and instruments. It facilitates control system configuration by bringing together direct input-output controller (IOC) configuration and real-time data from proton and neutron beam line models. BLED also simplifies development and speeds up the code-test-debug cycle. The set of tools that access BLED will be tailored to the needs of different categories of users, such as ESS staff physicists, engineers, and operators; external partner laboratories; and visiting experimental instrument users. The human-machine interface is vital to providing a high-quality experience to ICS users. It encompasses a wide array of devices and software tools, from control room screens to engineer terminal windows; from beam physics data tools to post-mortem data analysis tools. It serves users with a wide range of skills from widely varied backgrounds. The Controls Group is developing a set of user profiles to accommodate this diverse range of use-cases and users.
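    EPICS is the named framework, and its Channel Access layer is commonly scripted from Python; the sketch below uses pyepics with invented process-variable names (they are not real ESS channels):

    ```python
    # Hedged EPICS example: read, write, and monitor process variables via pyepics.
    from epics import caget, caput, PV

    amp = caget("LAB:RFQ:CAV1:FieldAmp")        # one-shot read of a process variable
    caput("LAB:RFQ:CAV1:FieldAmp:SP", 0.95)     # write a setpoint

    def on_update(pvname=None, value=None, **kw):
        """Push live values toward an operator display or archiver."""
        print(f"{pvname} -> {value}")

    pressure = PV("LAB:VAC:GAUGE01:Pressure", callback=on_update)  # subscribe to updates
    ```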

  12. Structural health monitoring for bolt loosening via a non-invasive vibro-haptics human-machine cooperative interface

    NASA Astrophysics Data System (ADS)

    Pekedis, Mahmut; Mascerañas, David; Turan, Gursoy; Ercan, Emre; Farrar, Charles R.; Yildiz, Hasan

    2015-08-01

    For the last two decades, developments in damage detection algorithms have greatly increased the potential for autonomous decisions about structural health. However, we are still struggling to build autonomous tools that can match the ability of a human to detect and localize the quantity of damage in structures. Therefore, there is a growing interest in merging the computational and cognitive concepts to improve the solution of structural health monitoring (SHM). The main object of this research is to apply the human-machine cooperative approach on a tower structure to detect damage. The cooperation approach includes haptic tools to create an appropriate collaboration between SHM sensor networks, statistical compression techniques and humans. Damage simulation in the structure is conducted by releasing some of the bolt loads. Accelerometers are bonded to various locations of the tower members to acquire the dynamic response of the structure. The obtained accelerometer results are encoded in three different ways to represent them as a haptic stimulus for the human subjects. Then, the participants are subjected to each of these stimuli to detect the bolt loosened damage in the tower. Results obtained from the human-machine cooperation demonstrate that the human subjects were able to recognize the damage with an accuracy of 88 ± 20.21% and response time of 5.87 ± 2.33 s. As a result, it is concluded that the currently developed human-machine cooperation SHM may provide a useful framework to interact with abstract entities such as data from a sensor network.
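    The encoding step, turning accelerometer data into a vibrotactile stimulus, can be sketched generically (the paper evaluates three specific encodings; the mapping and frequency range below are assumptions):

    ```python
    import numpy as np

    F_MIN, F_MAX = 50.0, 250.0     # assumed tactor frequency range (Hz)

    def haptic_frequency(signal: np.ndarray, baseline_rms: float) -> float:
        """Larger deviation from the healthy-structure baseline -> higher vibration frequency."""
        rms = float(np.sqrt(np.mean(signal ** 2)))
        deviation = min(abs(rms - baseline_rms) / baseline_rms, 1.0)   # clip to [0, 1]
        return F_MIN + deviation * (F_MAX - F_MIN)

    rng = np.random.default_rng(2)
    healthy = rng.standard_normal(1024)
    loosened = 1.6 * rng.standard_normal(1024)      # simulated bolt-loosening response
    base = float(np.sqrt(np.mean(healthy ** 2)))
    print(f"{haptic_frequency(loosened, base):.0f} Hz")
    ```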

  13. Distribution of man-machine controls in space teleoperation

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1982-01-01

    The distribution of control between man and machine is dependent on the tasks, available technology, human performance characteristics and control goals. This dependency has very specific projections on systems designed for teleoperation in space. This paper gives a brief outline of the space-related issues and presents the results of advanced teleoperator research and development at the Jet Propulsion Laboratory (JPL). The research and development work includes smart sensors, flexible computer controls and intelligent man-machine interface devices in the area of visual displays and kinesthetic man-machine coupling in remote control of manipulators. Some of the development results have been tested at the Johnson Space Center (JSC) using the simulated full-scale Shuttle Remote Manipulator System (RMS). The research and development work for advanced space teleoperation is far from complete and poses many interdisciplinary challenges.

  14. Physiological properties of brain-machine interface input signals.

    PubMed

    Slutzky, Marc W; Flint, Robert D

    2017-08-01

    Brain-machine interfaces (BMIs), also called brain-computer interfaces (BCIs), decode neural signals and use them to control some type of external device. Despite many experimental successes and terrific demonstrations in animals and humans, a high-performance, clinically viable device has not yet been developed for widespread usage. There are many factors that impact clinical viability and BMI performance. Arguably, the first of these is the selection of brain signals used to control BMIs. In this review, we summarize the physiological characteristics and performance-including movement-related information, longevity, and stability-of multiple types of input signals that have been used in invasive BMIs to date. These include intracortical spikes as well as field potentials obtained inside the cortex, at the surface of the cortex (electrocorticography), and at the surface of the dura mater (epidural signals). We also discuss the potential for future enhancements in input signal performance, both by improving hardware and by leveraging the knowledge of the physiological characteristics of these signals to improve decoding and stability. Copyright © 2017 the American Physiological Society.

  15. Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagnini, Paolo; Krause, Josua W.; Dasgupta, Aritra

    2017-05-14

    To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytic interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.
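    One common style of model-agnostic, instance-level explanation for binary data, greedily removing active features until the prediction flips, can be sketched as below (an illustration in the same spirit, not the Rivelo implementation):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    X = rng.integers(0, 2, size=(500, 20))             # binary "word present" features
    y = ((X[:, 0] & X[:, 3]) | X[:, 7]).astype(int)    # synthetic sentiment-like label
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    def explain(instance):
        """Return the active features whose removal flips the predicted class."""
        x = instance.copy()
        original = model.predict([x])[0]
        removed = []
        while x.sum() > 0:
            candidates = list(np.flatnonzero(x))
            # drop the active feature that lowers the original-class probability most
            scores = [model.predict_proba([np.where(np.arange(20) == i, 0, x)])[0][original]
                      for i in candidates]
            best = candidates[int(np.argmin(scores))]
            x[best] = 0
            removed.append(best)
            if model.predict([x])[0] != original:
                break
        return int(original), removed

    print(explain(X[0]))
    ```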

  16. Self-assembling fluidic machines

    NASA Astrophysics Data System (ADS)

    Grzybowski, Bartosz A.; Radkowski, Michal; Campbell, Christopher J.; Lee, Jessamine Ng; Whitesides, George M.

    2004-03-01

    This letter describes dynamic self-assembly of two-component rotors floating at the interface between liquid and air into simple, reconfigurable mechanical systems ("machines"). The rotors are powered by an external, rotating magnetic field, and their positions within the interface are controlled by: (i) repulsive hydrodynamic interactions between them and (ii) by localized magnetic fields produced by an array of small electromagnets located below the plane of the interface. The mechanical functions of the machines depend on the spatiotemporal sequence of activation of the electromagnets.

  17. We can't explore space without it - Common human space needs for exploration spaceflight

    NASA Technical Reports Server (NTRS)

    Daues, K. R.; Erwin, H. O.

    1992-01-01

    An overview is conducted of physiological, psychological, and human-interface requirements for manned spaceflight programs to establish common criteria. Attention is given to the comfort levels relevant to human support in exploration mission spacecraft and planetary habitats, and three comfort levels (CLs) are established. The levels include: (1) CL-1 for basic crew life support; (2) CL-2 for enabling the nominal completion of mission science; and (3) CL-3 which provides for enhanced life support and user-friendly interface systems. CL-2 support systems can include systems for EVA, workstations, and activity centers for repairs and enhanced utilization of payload and human/machine integration. CL-3 supports can be useful for maintaining crew psychological and physiological health as well as the design of comfortable and earthlike surroundings. While all missions require CL-1 commonality, CL-2 commonality is required only for EVA systems, display nomenclature, and restraint designs.

  18. Final Report of Work Done on Contract NONR-4010(03).

    ERIC Educational Resources Information Center

    Chapanis, Alphonse

    The 24 papers listed report the findings of a study funded by the Office of Naval Research. The study concentrated on the sensory and cognitive factors in man-machine interfaces. The papers are categorized into three groups: perception studies, human engineering studies, and methodological papers. A brief summary of the most noteworthy findings in…

  19. Reducing lumber thickness variation using real-time statistical process control

    Treesearch

    Thomas M. Young; Brian H. Bond; Jan Wiedenbeck

    2002-01-01

    A technology feasibility study for reducing lumber thickness variation was conducted from April 2001 until March 2002 at two sawmills located in the southern U.S. A real-time statistical process control (SPC) system was developed that featured Wonderware human machine interface technology (HMI) with distributed real-time control charts for all sawing centers and...
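    A generic real-time SPC computation of the kind such a system performs (standard X-bar/R chart limits, not the Wonderware configuration from the study; subgroup constants are the textbook values for subgroups of five):

    ```python
    import numpy as np

    A2, D3, D4 = 0.577, 0.0, 2.114        # Shewhart constants for subgroup size n = 5

    def control_limits(subgroups: np.ndarray) -> dict:
        """subgroups: shape (k, 5) array of lumber thickness measurements (inches)."""
        xbar = subgroups.mean(axis=1)
        r = subgroups.max(axis=1) - subgroups.min(axis=1)
        xbarbar, rbar = xbar.mean(), r.mean()
        return {"xbar_UCL": xbarbar + A2 * rbar, "xbar_LCL": xbarbar - A2 * rbar,
                "R_UCL": D4 * rbar, "R_LCL": D3 * rbar}

    rng = np.random.default_rng(4)
    data = 1.05 + 0.01 * rng.standard_normal((25, 5))   # 25 subgroups of 5 boards each
    print(control_limits(data))
    ```

    A sawing center would then be flagged whenever a new subgroup mean or range falls outside these limits.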

  20. User-Based Information Retrieval System Interface Evaluation: An Examination of an On-Line Public Access Catalog.

    ERIC Educational Resources Information Center

    Hert, Carol A.; Nilan, Michael S.

    1991-01-01

    Presents preliminary data that characterizes the relationship between what users say they are trying to accomplish when using an online public access catalog (OPAC) and their perceptions of what input to give the system. Human-machine interaction is discussed, and appropriate methods for evaluating information retrieval systems are considered. (18…

  1. Massachusetts Institute of Technology Consortium Agreement

    DTIC Science & Technology

    1999-03-01

    In this, our second progress report of the Phase Two Home Automation and Healthcare Consortium at the Brit and Alex d’Arbeloff Laboratory for...Covered here are the diverse fields of home automation and healthcare research, ranging from human modeling, patient monitoring, and diagnosis to new...sensors and actuators, physical aids, human-machine interface and home automation infrastructure. These results will be presented at the upcoming General Assembly of the Consortium held on October 27-October 30, 1998 at MIT.

  2. Flexible and stretchable electronics for biointegrated devices.

    PubMed

    Kim, Dae-Hyeong; Ghaffari, Roozbeh; Lu, Nanshu; Rogers, John A

    2012-01-01

    Advances in materials, mechanics, and manufacturing now allow construction of high-quality electronics and optoelectronics in forms that can readily integrate with the soft, curvilinear, and time-dynamic surfaces of the human body. The resulting capabilities create new opportunities for studying disease states, improving surgical procedures, monitoring health/wellness, establishing human-machine interfaces, and performing other functions. This review summarizes these technologies and illustrates their use in forms integrated with the brain, the heart, and the skin.

  3. An All-Silk-Derived Dual-Mode E-skin for Simultaneous Temperature-Pressure Detection.

    PubMed

    Wang, Chunya; Xia, Kailun; Zhang, Mingchao; Jian, Muqiang; Zhang, Yingying

    2017-11-15

    Flexible skin-mimicking electronics are highly desired for development of smart human-machine interfaces and wearable human-health monitors. Human skin is able to simultaneously detect different information, such as touch, friction, temperature, and humidity. However, due to the mutual interference of sensors with different functions, it is still a big challenge to fabricate multifunctional electronic skins (E-skins). Herein, a combo temperature-pressure E-skin is reported through assembling a temperature sensor and a strain sensor in both of which flexible and transparent silk-nanofiber-derived carbon fiber membranes (SilkCFM) are used as the active material. The temperature sensor presents a high temperature sensitivity of 0.81% per degree Celsius. The strain sensor shows an extremely high sensitivity with a gauge factor of ∼8350 at 50% strain, enabling the detection of subtle pressure stimuli that induce local strain. Importantly, the structure of the SilkCFM in each sensor is designed to be passive to other stimuli, enabling the integrated E-skin to precisely detect temperature and pressure at the same time. It is demonstrated that the E-skin can detect and distinguish exhaling, finger pressing, and spatial distribution of temperature and pressure, which cannot be realized using single mode sensors. The remarkable performance of the silk-based combo temperature-pressure sensor, together with its green and scalable fabrication process, promises applications in human-machine interfaces and soft electronics.
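    For reference, the gauge factor quoted above follows the standard resistive-sensor definition (the textbook relation, not taken from the paper); at 50% strain it implies a relative resistance change of roughly four thousandfold:

    ```latex
    \[
      \mathrm{GF} \;=\; \frac{\Delta R / R_{0}}{\varepsilon}
      \qquad\Longrightarrow\qquad
      \frac{\Delta R}{R_{0}} \;=\; \mathrm{GF}\,\varepsilon \;\approx\; 8350 \times 0.50 \;\approx\; 4.2\times 10^{3}
    \]
    ```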

  4. An extremely lightweight fingernail worn prosthetic interface device

    NASA Astrophysics Data System (ADS)

    Yetkin, Oguz; Ahluwalia, Simranjit; Silva, Dinithi; Kasi-Okonye, Isioma; Volker, Rachael; Baptist, Joshua R.; Popa, Dan O.

    2016-05-01

    Upper limb prosthetics are currently operated using several electromyography sensors mounted on an amputee's residual limb. In order for any prosthetic driving interface to be widely adopted, it needs to be responsive, lightweight, and out of the way when not being used. In this paper we discuss the possibility of replacing such electrodes with fingernail optical sensor systems mounted on the sound limb. We present a prototype device that can detect pinch gestures and communicate with the prosthetic system. The device detects the relative position of fingers to each other by measuring light transmitted via tissue. Applications are not limited to prosthetic control, but can be extended to other human-machine interfaces.

  5. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. Sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system is evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair.
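    The minimum-variance fusion of two independent, unbiased orientation estimates (here from the inertial sensor and the vision system) takes the standard inverse-variance-weighted form shown below; the paper's actual filter may differ, so this is only the textbook relation:

    ```latex
    \[
      \hat{\theta} \;=\;
      \frac{\theta_{\mathrm{imu}}/\sigma_{\mathrm{imu}}^{2} + \theta_{\mathrm{cam}}/\sigma_{\mathrm{cam}}^{2}}
           {1/\sigma_{\mathrm{imu}}^{2} + 1/\sigma_{\mathrm{cam}}^{2}},
      \qquad
      \operatorname{Var}\bigl(\hat{\theta}\bigr) \;=\;
      \left(\frac{1}{\sigma_{\mathrm{imu}}^{2}} + \frac{1}{\sigma_{\mathrm{cam}}^{2}}\right)^{-1}
    \]
    ```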

  6. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. Sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system is evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  7. Human-machine interface (HMI) report for 241-SY-101 data acquisition [and control] system (DACS) upgrade study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truitt, R.W.

    1997-10-22

    This report provides an independent evaluation of information for a Windows-based Human Machine Interface (HMI) to replace the existing DOS-based Iconics HMI currently used in the Data Acquisition and Control System (DACS) used at Tank 241-SY-101. A fundamental reason for this evaluation is the difficulty of maintaining the system with obsolete, unsupported software. The DACS uses a software operator interface (Genesis for DOS HMI) that is no longer supported by its manufacturer, Iconics. In addition to its obsolescence, it is complex and difficult to train additional personnel on. The FY 1997 budget allocated $40K for phase 1 of a software/hardware upgrade that would have allowed the old DOS-based system to be replaced by a current Windows-based system. Unfortunately, budget constraints during FY 1997 have prompted deferral of the upgrade. The upgrade needs to be performed at the earliest possible time, before other failures render the system useless. Once completed, the upgrade could alleviate other concerns: spare pump software may be able to be incorporated into the same software as the existing pump, thereby eliminating the parallel path dilemma; and the newer, less complex software should expedite training of future personnel, and in the process, require that less technical time be required to maintain the system.

  8. Highly Stretchable Core-Sheath Fibers via Wet-Spinning for Wearable Strain Sensors.

    PubMed

    Tang, Zhenhua; Jia, Shuhai; Wang, Fei; Bian, Changsheng; Chen, Yuyu; Wang, Yonglin; Li, Bo

    2018-02-21

    Lightweight, stretchable, and wearable strain sensors have recently been widely studied for the development of health monitoring systems, human-machine interfaces, and wearable devices. Herein, highly stretchable polymer elastomer-wrapped carbon nanocomposite piezoresistive core-sheath fibers are successfully prepared using a facile and scalable one-step coaxial wet-spinning assembly approach. The carbon nanotube-polymeric composite core of the stretchable fiber is surrounded by an insulating sheath, similar to conventional cables, and shows excellent electrical conductivity with a low percolation threshold (0.74 vol %). The core-sheath elastic fibers are used as wearable strain sensors, exhibiting ultra-high stretchability (above 300%), excellent stability (>10 000 cycles), fast response, low hysteresis, and good washability. Furthermore, the piezoresistive core-sheath fiber possesses bending-insensitiveness and negligible torsion-sensitive properties, and the strain sensing performance of piezoresistive fibers maintains a high degree of stability under harsh conditions. On the basis of this high level of performance, the fiber-shaped strain sensor can accurately detect both subtle and large-scale human movements by embedding it in gloves and garments or by directly attaching it to the skin. The current results indicate that the proposed stretchable strain sensor has many potential applications in health monitoring, human-machine interfaces, soft robotics, and wearable electronics.

  9. Ant-Based Cyber Defense (also known as

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glenn Fink, PNNL

    2015-09-29

    ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.

  10. Interface Metaphors for Interactive Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasper, Robert J.; Blaha, Leslie M.

    To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user’s knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display as well as the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms. Or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.

  11. The use of affective interaction design in car user interfaces.

    PubMed

    Gkouskos, Dimitrios; Chen, Fang

    2012-01-01

    Recent developments in the car industry have put Human Machine Interfaces under the spotlight. Developing gratifying human-car interactions has become one of the more prominent areas that car manufacturers want to invest in. However, concepts like emotional design remain foreign to the industry. In this study 12 experts on the field of automobile HMI design were interviewed in order to investigate their needs and opinions of emotional design. Results show that emotional design has yet to be introduced for this context of use. Designers need a tool customized for the intricacies of the car HMI field that can provide them with support and guidance so that they can create emotionally attractive experiences for drivers and passengers alike.

  12. Feasibility of task-specific brain-machine interface training for upper-extremity paralysis in patients with chronic hemiparetic stroke.

    PubMed

    Nishimoto, Atsuko; Kawakami, Michiyuki; Fujiwara, Toshiyuki; Hiramoto, Miho; Honaga, Kaoru; Abe, Kaoru; Mizuno, Katsuhiro; Ushiba, Junichi; Liu, Meigen

    2018-01-10

    Brain-machine interface training was developed for upper-extremity rehabilitation for patients with severe hemiparesis. Its clinical application, however, has been limited because of its lack of feasibility in real-world rehabilitation settings. We developed a new compact task-specific brain-machine interface system that enables task-specific training, including reach-and-grasp tasks, and studied its clinical feasibility and effectiveness for upper-extremity motor paralysis in patients with stroke. Prospective before-after study. Twenty-six patients with severe chronic hemiparetic stroke. Participants were trained with the brain-machine interface system to pick up and release pegs during 40-min sessions and 40 min of standard occupational therapy per day for 10 days. Fugl-Meyer upper-extremity motor (FMA) and Motor Activity Log-14 amount of use (MAL-AOU) scores were assessed before and after the intervention. To test its feasibility, 4 occupational therapists who operated the system for the first time assessed it with the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) 2.0. FMA and MAL-AOU scores improved significantly after brain-machine interface training, with the effect sizes being medium and large, respectively (p<0.01, d=0.55; p<0.01, d=0.88). QUEST effectiveness and safety scores showed feasibility and satisfaction in the clinical setting. Our newly developed compact brain-machine interface system is feasible for use in real-world clinical settings.
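    The quoted effect sizes follow the usual Cohen's d convention (d ≈ 0.5 medium, d ≈ 0.8 large); since the abstract does not state the exact formula used, the standard pre-post definition is given here for orientation:

    ```latex
    \[
      d \;=\; \frac{\bar{x}_{\mathrm{post}} - \bar{x}_{\mathrm{pre}}}{s},
      \qquad
      s \;=\; \sqrt{\frac{s_{\mathrm{pre}}^{2} + s_{\mathrm{post}}^{2}}{2}}
    \]
    ```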

  13. Extending human proprioception to cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David

    2016-04-01

    Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
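    A hedged sketch of the mapping described above, with each forearm tactor dedicated to one joint of the robot arm (the frequency range and joint limits are invented):

    ```python
    F_MIN, F_MAX = 40.0, 200.0           # assumed tactor frequency range (Hz)

    JOINT_LIMITS = {                      # hypothetical joint ranges (radians)
        "shoulder_pitch": (-1.5, 1.5),
        "elbow": (0.0, 2.6),
        "wrist_roll": (-3.1, 3.1),
    }

    def tactor_frequencies(joint_angles: dict) -> dict:
        """Map each joint angle to the vibration frequency of its dedicated tactor."""
        out = {}
        for joint, angle in joint_angles.items():
            lo, hi = JOINT_LIMITS[joint]
            x = min(max((angle - lo) / (hi - lo), 0.0), 1.0)   # normalize to [0, 1]
            out[joint] = F_MIN + x * (F_MAX - F_MIN)
        return out

    print(tactor_frequencies({"shoulder_pitch": 0.3, "elbow": 1.2, "wrist_roll": -0.5}))
    ```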

  14. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.

  15. Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma.

    PubMed

    Wrzeszczynski, Kazimierz O; Frank, Mayu O; Koyama, Takahiko; Rhrissorrakrai, Kahn; Robine, Nicolas; Utro, Filippo; Emde, Anne-Katrin; Chen, Bo-Juen; Arora, Kanika; Shah, Minita; Vacic, Vladimir; Norel, Raquel; Bilal, Erhan; Bergmann, Ewa A; Moore Vogel, Julia L; Bruce, Jeffrey N; Lassman, Andrew B; Canoll, Peter; Grommes, Christian; Harvey, Steve; Parida, Laxmi; Michelini, Vanessa V; Zody, Michael C; Jobanputra, Vaidehi; Royyuru, Ajay K; Darnell, Robert B

    2017-08-01

    To analyze a glioblastoma tumor specimen with 3 different platforms and compare potentially actionable calls from each. Tumor DNA was analyzed by a commercial targeted panel. In addition, tumor-normal DNA was analyzed by whole-genome sequencing (WGS) and tumor RNA was analyzed by RNA sequencing (RNA-seq). The WGS and RNA-seq data were analyzed by a team of bioinformaticians and cancer oncologists, and separately by IBM Watson Genomic Analytics (WGA), an automated system for prioritizing somatic variants and identifying drugs. More variants were identified by WGS/RNA analysis than by targeted panels. WGA completed a comparable analysis in a fraction of the time required by the human analysts. The development of an effective human-machine interface in the analysis of deep cancer genomic datasets may provide potentially clinically actionable calls for individual patients in a more timely and efficient manner than currently possible. NCT02725684.

  16. A Qualitative Model of Human Interaction with Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in consort to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  17. A qualitative model of human interaction with complex dynamic systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in consort to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  18. Low Latency Messages on Distributed Memory Multiprocessors

    DOE PAGES

    Rosing, Matt; Saltz, Joel

    1995-01-01

    This article describes many of the issues in developing an efficient interface for communication on distributed memory machines. Although the hardware component of message latency is less than 1 μs on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 μs. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. This article describes several tests performed and many of the issues involved in supporting low latency messages on distributed memory machines.

  19. Reprint of: Client interfaces to the Virtual Observatory Registry

    NASA Astrophysics Data System (ADS)

    Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.

    2015-06-01

    The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.
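    A client-side query against the RegTAP interface discussed here can be issued with pyvo; the endpoint below is one public RegTAP service and is an assumption of this sketch rather than part of the article:

    ```python
    # Query the VO Registry's relational (RegTAP) interface for matching resources.
    import pyvo

    regtap = pyvo.dal.TAPService("http://reg.g-vo.org/tap")   # assumed RegTAP endpoint
    query = """
        SELECT TOP 5 ivoid, res_title
        FROM rr.resource
        WHERE 1 = ivo_hasword(res_description, 'pulsar')
    """
    for row in regtap.run_sync(query):
        print(row["ivoid"], "|", row["res_title"])
    ```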

  20. Client interfaces to the Virtual Observatory Registry

    NASA Astrophysics Data System (ADS)

    Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.

    2015-04-01

    The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.

  1. The human role in space (THURIS) applications study. Final briefing

    NASA Technical Reports Server (NTRS)

    Maybee, George W.

    1987-01-01

    The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.

  2. An Experience of Teaching for Learning by Observation: Remote-Controlled Experiments on Electrical Circuits

    ERIC Educational Resources Information Center

    Kong, Siu Cheung; Yeung, Yau Yuen; Wu, Xian Qiu

    2009-01-01

    In order to facilitate senior primary school students in Hong Kong to engage in learning by observation of the phenomena related to electrical circuits, a design of a specific courseware system, of which the interactive human-machine interface was created with the use of an open-source software called the LabVNC, for conducting online…

  3. Designing Microstructures/Structures for Desired Functional Material and Local Fields

    DTIC Science & Technology

    2015-12-02

    utilized to engineer multifunctional soft materials for multi-sensing, multi-actuating, human-machine interfaces. [3] Establish a theoretical framework...model for surface elasticity, (ii) derived a new type of Maxwell stress in soft materials due to quantum mechanical-elasticity coupling and...elucidated its ramification in engineering multifunctional soft materials, and (iii) demonstrated the possibility of concurrent magnetoelectricity and

  4. Technology Roadmap Instrumentation, Control, and Human-Machine Interface to Support DOE Advanced Nuclear Energy Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donald D Dudenhoeffer; Bruce P Hallbert

    Instrumentation, Controls, and Human-Machine Interface (ICHMI) technologies are essential to ensuring delivery and effective operation of optimized advanced Generation IV (Gen IV) nuclear energy systems. In 1996, the Watts Bar I nuclear power plant in Tennessee was the last U.S. nuclear power plant to go on line. It was, in fact, built based on pre-1990 technology. Since this last U.S. nuclear power plant was designed, there have been major advances in the field of ICHMI systems. Computer technology employed in other industries has advanced dramatically, and computing systems are now replaced every few years as they become functionally obsolete. Functional obsolescence occurs when newer, more functional technology replaces or supersedes an existing technology, even though an existing technology may well be in working order. Although ICHMI architectures are comprised of much of the same technology, they have not been updated nearly as often in the nuclear power industry. For example, some newer Personal Digital Assistants (PDAs) or handheld computers may, in fact, have more functionality than the 1996 computer control system at the Watts Bar I plant. This illustrates the need to transition and upgrade current nuclear power plant ICHMI technologies.

  5. Combined Auditory and Vibrotactile Feedback for Human-Machine-Interface Control.

    PubMed

    Thorp, Elias B; Larson, Eric; Stepp, Cara E

    2014-01-01

    The purpose of this study was to determine the effect of the addition of binary vibrotactile stimulation to continuous auditory feedback (vowel synthesis) for human-machine interface (HMI) control. Sixteen healthy participants controlled facial surface electromyography to achieve 2-D targets (vowels). Eight participants used only real-time auditory feedback to locate targets whereas the other eight participants were additionally alerted to having achieved targets with confirmatory vibrotactile stimulation at the index finger. All participants trained using their assigned feedback modality (auditory alone or combined auditory and vibrotactile) over three sessions on three days and completed a fourth session on the third day using novel targets to assess generalization. Analyses of variance performed on the 1) percentage of targets reached and 2) percentage of trial time at the target revealed a main effect for feedback modality: participants using combined auditory and vibrotactile feedback performed significantly better than those using auditory feedback alone. No effect was found for session or the interaction of feedback modality and session, indicating a successful generalization to novel targets but lack of improvement over training sessions. Future research is necessary to determine the cognitive cost associated with combined auditory and vibrotactile feedback during HMI control.

  6. The role of voice input for human-machine communication.

    PubMed Central

    Cohen, P R; Oviatt, S L

    1995-01-01

    Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803

  7. The JPL telerobot operator control station. Part 1: Hardware

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Tower, John T.; Hunka, George W.; Vansant, Glenn J.

    1989-01-01

    The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The hardware design, system architecture, and its integration and interface with the rest of the Telerobot Demonstrator System are discussed.

  8. Analysis and prediction of meal motion by EMG signals

    NASA Astrophysics Data System (ADS)

    Horihata, S.; Iwahara, H.; Yano, K.

    2007-12-01

    The lack of carers for senior citizens and physically handicapped persons in our country has now become a huge issue and has created a great need for carer robots. The usual carer robots (many of which have switches or joysticks for their interfaces), however, are neither easy to use nor very popular. Therefore, haptic devices have been adopted for a human-machine interface that will enable intuitive operation. At this point, a method is being tested that seeks to prevent wrong operations arising from the user's signals. This method matches motions with EMG signals.

  9. Supervisory Control of Multiple Uninhabited Systems - Methodologies and Enabling Human-Robot Interface Technologies (Commande et surveillance de multiples systemes sans pilote - Methodologies et technologies habilitantes d’interfaces homme-machine)

    DTIC Science & Technology

    2012-12-01

    The French contribution covers the SMAART (2006–2008) and SUSIE (2009–2011) studies, carried out in Brest, Nancy, and Paris (France). The report also refers to the Research and Technology Agency (RTA), a dedicated staff with its headquarters in Neuilly, near Paris, France, set up to facilitate contacts with the military users. Tables in the report include mission delay for the helicopter (Table 8-1), assistant interventions and commander's reactions (Table 8-2), and a partial LOA matrix (Table 10-1).

  10. Development and Implementation of a Simplified Tool Measuring System

    NASA Astrophysics Data System (ADS)

    Chen, Jenn-Yih; Lee, Bean-Yin; Lee, Kuang-Chyi; Chen, Zhao-Kai

    2010-01-01

    This paper presents a simplified system for measuring geometric profiles of end mills. Firstly, a CCD camera was used to capture images of cutting tools. Then, an image acquisition card with an encoding function was adopted to convert the image source for a USB port of a PC, so the image could be shown on a monitor. In addition, two linear scales were mounted on the X-Y table for positioning and measuring purposes. The signals of the linear scales were transmitted to a 4-axis quadrature encoder with a 4-channel counter card for position monitoring. C++ Builder was utilized for designing the user-friendly human-machine interface of the tool measuring system. A cross line on the image in the interface shows a coordinate for the position measurement. Finally, a well-known tool measuring and inspection machine was employed as the measuring standard. This study compares the measuring results obtained with that machine and with the proposed system. Experimental results show that the percentage of measuring error is acceptable for some geometric parameters of square or ball nose end mills. Therefore, the results demonstrate the effectiveness of the presented approach.

  11. Prosthetic EMG control enhancement through the application of man-machine principles

    NASA Technical Reports Server (NTRS)

    Simcox, W. A.

    1977-01-01

    An area in medicine that appears suitable for man-machine principles is rehabilitation research, particularly when the motor aspects of the body are involved. If one considers the limb, whether functional or not, as the machine, the brain as the controller, and the neuromuscular system as the man-machine interface, the human body is reduced to a man-machine system that can benefit from the principles behind such systems. The area of rehabilitation that this paper deals with is that of an arm amputee and his prosthetic device. Reducing this area to its man-machine basics, the problem becomes one of attaining natural multiaxis prosthetic control using electromyographic activity (EMG) as the means of communication between man and prosthesis. In order to use EMG as the communication channel it must be amplified and processed to yield a high-information signal suitable for control. The most common processing scheme employed is termed Mean Value Processing. This technique for extracting the useful EMG signal consists of a differential-to-single-ended conversion of the surface activity, followed by rectification and smoothing.

  12. Low latency messages on distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Saltz, Joel

    1993-01-01

    Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine-grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications than a single processor system. The ability to send either data or procedures provides flexibility for minimizing message latency, depending on the type of communication being performed. The tests performed and the proposed interface are described.
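
    The "data or procedure" message model described above can be illustrated with a toy sketch: a communication worker on each node either buffers incoming data or runs a named handler, so the computation thread never has to poll for typed messages. The handler table, message layout, and queue-based transport are illustrative assumptions, not the interface proposed in the report.

        # Toy illustration of messages carrying either data or a procedure name.
        import queue
        import threading

        HANDLERS = {"accumulate": lambda state, x: state.append(x)}   # illustrative handler table

        def communication_worker(inbox: queue.Queue, state: list):
            while True:
                msg = inbox.get()
                if msg is None:                 # shutdown sentinel
                    break
                kind, payload = msg[0], msg[1:]
                if kind == "data":              # data message: just buffer the value
                    state.append(payload[0])
                elif kind == "proc":            # procedure message: run a registered handler
                    name, args = payload
                    HANDLERS[name](state, *args)

        inbox, state = queue.Queue(), []
        worker = threading.Thread(target=communication_worker, args=(inbox, state))
        worker.start()
        inbox.put(("data", 42))
        inbox.put(("proc", "accumulate", (7,)))
        inbox.put(None)
        worker.join()
        print(state)                            # [42, 7]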

  13. Automated visual imaging interface for the plant floor

    NASA Astrophysics Data System (ADS)

    Wutke, John R.

    1991-03-01

    The paper provides an overview of the challenges facing a user of automated visual imaging ("AVI") machines and the philosophies that should be employed in designing them. As manufacturing tools and equipment become more sophisticated, it is increasingly difficult to maintain efficient interaction between the operator and the machine. The typical user of an AVI machine in a production environment is technically unsophisticated. Also, operator and machine ergonomics are often a neglected or poorly addressed part of an efficient manufacturing process. This paper presents a number of man-machine interface design techniques and philosophies that effectively solve these problems.

  14. A brain-machine interface to navigate a mobile robot in a planar workspace: enabling humans to fly simulated aircraft with EEG.

    PubMed

    Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy

    2013-03-01

    This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.

  15. Clinical Outcome of Hydroxyapatite Coated, Bioactive Glass Coated, and Machined Ti6Al4V Threaded Dental Implant in Human Jaws: A Short-Term Comparative Study.

    PubMed

    Mistry, Surajit; Roy, Rajiv; Kundu, Biswanath; Datta, Someswar; Kumar, Manoj; Chanda, Abhijit; Kundu, Debabrata

    2016-04-01

    A growing aspect of endosseous implant research is focused on surface modification of dental implants for the purpose of improving osseointegration. The aim of this study was to evaluate and compare the clinical outcome (i.e., osseointegration) of hydroxyapatite coated, bioactive glass coated, and machined titanium alloy threaded dental implants in human jaw bone after implantation. One hundred twenty-six implants (45 hydroxyapatite coated, 41 bioactive glass coated, and 40 machined titanium implants) were placed in incisor areas of 62 adult patients. Outcome was assessed up to 12 months after prosthetic rehabilitation using different clinical and radiological parameters. Surface roughness of failed implants was analyzed by laser profilometer. Hydroxyapatite and bioactive glass coating materials were nontoxic and biocompatible. The least marginal bone loss in radiographs, significantly higher (P < 0.05) interface radiodensity, and fewer interfacial gaps were observed in computed tomography with bioactive glass coated implants at the anterior maxilla compared to the other 2 types. Bioactive glass coated implants are as safe and effective as hydroxyapatite coated and machined titanium implants in achieving osseointegration and can therefore be effectively used as an alternative coating material for dental implants.

  16. Three-dimensional anthropometric techniques applied to the fabrication of burn masks and the quantification of wound healing

    NASA Astrophysics Data System (ADS)

    Whitestone, Jennifer J.; Geisen, Glen R.; McQuiston, Barbara K.

    1997-03-01

    Anthropometric surveys conducted by the military provide comprehensive human body measurement data that are human interface requirements for successful mission performance of weapon systems, including cockpits, protective equipment, and clothing. The application of human body dimensions to model humans and human-machine performance begins with engineering anthropometry. There are two critical elements to engineering anthropometry: data acquisition and data analysis. First, the human body is captured dimensionally with either traditional anthropometric tools, such as calipers and tape measures, or with advanced image acquisition systems, such as a laser scanner. Next, numerous statistical analysis tools, such as multivariate modeling and feature envelopes, are used to effectively transition these data for design and evaluation of equipment and work environments. Recently, Air Force technology transfer allowed researchers at the Computerized Anthropometric Research and Design (CARD) Laboratory at Wright-Patterson Air Force Base to work with the Dayton, Ohio area medical community in assessing the rate of wound healing and improving the fit of total contact burn masks. This paper describes the successful application of CARD Lab engineering anthropometry to two medically oriented human interface problems.

  17. [The current state of the brain-computer interface problem].

    PubMed

    Shurkhay, V A; Aleksandrova, E V; Potapov, A A; Goryainov, S A

    2015-01-01

    It was only 40 years ago that the first PC appeared. Over this period, rather short in historical terms, we have witnessed the revolutionary changes in lives of individuals and the entire society. Computer technologies are tightly connected with any field, either directly or indirectly. We can currently claim that computers are manifold superior to a human mind in terms of a number of parameters; however, machines lack the key feature: they are incapable of independent thinking (like a human). However, the key to successful development of humankind is collaboration between the brain and the computer rather than competition. Such collaboration when a computer broadens, supplements, or replaces some brain functions is known as the brain-computer interface. Our review focuses on real-life implementation of this collaboration.

  18. Intelligent systems and advanced user interfaces for design, operation, and maintenance of command management systems

    NASA Technical Reports Server (NTRS)

    Potter, William J.; Mitchell, Christine M.

    1993-01-01

    Historically, command management systems (CMS) have been large and expensive spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS system. New technologies, in addition to a core CMS common to a range of spacecraft, may facilitate the training and enhance the efficiency of CMS operations. Current mission operations center (MOC) hardware and software include Unix workstations, the C/C++ programming languages, and an X window interface. This configuration provides the power and flexibility to support sophisticated and intelligent user interfaces that exploit state-of-the-art technologies in human-machine interaction, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of these issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, human-machine systems design and analysis tools (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of the CMS for a specific spacecraft as well as continuity for CMS design and development across spacecraft. The first six months of this research saw a broad investigation by Georgia Tech researchers into the function, design, and operation of current and planned command management systems at Goddard Space Flight Center. As the first step, the researchers attempted to understand the current and anticipated horizons of command management systems at Goddard. Preliminary results are given on CMS commonalities and causes of low re-use, and methods are proposed to facilitate increased re-use.

  19. Full-motion video analysis for improved gender classification

    NASA Astrophysics Data System (ADS)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video and motion-capture range data provide higher-resolution temporal and spatial datasets for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets collected in controlled environments. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on this larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improve from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
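
    A compact scikit-learn sketch of the comparison described above, run on synthetic stand-in features since the motion-capture dataset itself is not available here; the feature count and random labels are assumptions for illustration, so the accuracies will not reproduce the 73%/88% figures.

        # LDA vs. RBF-kernel SVM, both scored with leave-one-out cross-validation.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(98, 10))          # 98 trials, 10 illustrative gait features
        y = rng.integers(0, 2, size=98)        # binary gender labels (synthetic)

        loo = LeaveOneOut()
        for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                          ("RBF-SVM", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
            acc = cross_val_score(clf, X, y, cv=loo).mean()
            print(f"{name}: LOO accuracy = {acc:.2f}")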

  20. Force reflecting hand controller

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas A. (Inventor); Snow, Edward R. (Inventor); Townsend, William T. (Inventor)

    1993-01-01

    A universal input device for interfacing a human operator with a slave machine such as a robot or the like includes a plurality of serially connected mechanical links extending from a base. A handgrip is connected to the mechanical links distal from the base such that a human operator may grasp the handgrip and control the position thereof relative to the base through the mechanical links. A plurality of rotary joints is arranged to connect the mechanical links together to provide at least three translational degrees of freedom and at least three rotational degrees of freedom of motion of the handgrip relative to the base. A cable and pulley assembly for each joint is connected to a corresponding motor for transmitting forces from the slave machine to the handgrip to provide kinesthetic feedback to the operator and for producing control signals that may be transmitted from the handgrip to the slave machine. The device gives excellent kinesthetic feedback, high-fidelity force/torque feedback, a kinematically simple structure, mechanically decoupled motion in all six degrees of freedom, and zero backlash. The device also has a much larger work envelope, greater stiffness and responsiveness, smaller stowage volume, and better overlap of the human operator's range of motion than previous designs.

  1. Computation of the Distribution of the Fiber-Matrix Interface Cracks in the Edge Trimming of CFRP

    NASA Astrophysics Data System (ADS)

    Wang, Fu-ji; Zhang, Bo-yu; Ma, Jian-wei; Bi, Guang-jian; Hu, Hai-bo

    2018-04-01

    Edge trimming is commonly used to bring CFRP components to the right dimensions and shape in the aerospace industry. However, various forms of undesirable machining damage occur frequently and significantly decrease the material performance of CFRP. The damage is difficult to predict and control because of its complicated governing laws, causing unsatisfactory machining quality of CFRP components. Since most of the damage has the same essence, namely fiber-matrix interface cracks, this study aims to calculate their distribution in edge trimming of CFRP and thereby obtain the effects of the machining parameters, which could help guide the optimal selection of machining parameters in engineering. Through orthogonal cutting experiments, the quantitative relation between the fiber-matrix interface crack depth and the fiber cutting angle, cutting depth, and cutting speed is established. From an analysis of the material removal process at any location of the workpiece in edge trimming, the instantaneous cutting parameters are calculated, and the formation process of the fiber-matrix interface crack is revealed. Finally, a computational method for the fiber-matrix interface cracks in edge trimming of CFRP is proposed. Based on the computational results, the fiber orientation of CFRP workpieces is found to be the most significant factor affecting the fiber-matrix interface cracks: it can not only change their depth from micrometers to millimeters but also control their distribution pattern. Other machining parameters only influence the crack depth and have little effect on the distribution pattern.

  2. The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces.

    PubMed

    O'Shea, Daniel J; Trautmann, Eric; Chandrasekaran, Chandramouli; Stavisky, Sergey; Kao, Jonathan C; Sahani, Maneesh; Ryu, Stephen; Deisseroth, Karl; Shenoy, Krishna V

    2017-01-01

    A central goal of neuroscience is to understand how populations of neurons coordinate and cooperate in order to give rise to perception, cognition, and action. Nonhuman primates (NHPs) are an attractive model with which to understand these mechanisms in humans, primarily due to the strong homology of their brains and the cognitively sophisticated behaviors they can be trained to perform. Using electrode recordings, the activity of one to a few hundred individual neurons may be measured electrically, which has enabled many scientific findings and the development of brain-machine interfaces. Despite these successes, electrophysiology samples sparsely from neural populations and provides little information about the genetic identity and spatial micro-organization of recorded neurons. These limitations have spurred the development of all-optical methods for neural circuit interrogation. Fluorescent calcium signals serve as a reporter of neuronal responses, and when combined with post-mortem optical clearing techniques such as CLARITY, provide dense recordings of neuronal populations, spatially organized and annotated with genetic and anatomical information. Here, we advocate that this methodology, which has been of tremendous utility in smaller animal models, can and should be developed for use with NHPs. We review here several of the key opportunities and challenges for calcium-based optical imaging in NHPs. We focus on motor neuroscience and brain-machine interface design as representative domains of opportunity within the larger field of NHP neuroscience. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  3. An EOG-Based Human-Machine Interface for Wheelchair Control.

    PubMed

    Huang, Qiyun; He, Shenghong; Wang, Qihong; Gu, Zhenghui; Peng, Nengneng; Li, Kai; Zhang, Yuandong; Shao, Ming; Li, Yuanqing

    2017-07-27

    Non-manual human-machine interfaces (HMIs) have been studied for wheelchair control with the aim of helping severely paralyzed individuals regain some mobility. The challenge is to rapidly, accurately and sufficiently produce control commands, such as left and right turns, forward and backward motions, acceleration, deceleration, and stopping. In this paper, a novel electrooculogram (EOG)-based HMI is proposed for wheelchair control. Thirteen flashing buttons are presented in the graphical user interface (GUI), and each of the buttons corresponds to a command. These buttons flash one by one in a pre-defined sequence. The user can select a button by blinking in sync with its flashes. The algorithm detects the eye blinks from a channel of vertical EOG data and determines the user's target button based on the synchronization between the detected blinks and the button's flashes. For healthy subjects/patients with spinal cord injuries (SCIs), the proposed HMI achieved an average accuracy of 96.7%/91.7% and a response time of 3.53 s/3.67 s with a zero false positive rate (FPR). Using only one channel of vertical EOG signals associated with eye blinks, the proposed HMI can accurately provide sufficient commands with a satisfactory response time. The proposed HMI provides a novel non-manual approach for severely paralyzed individuals to control a wheelchair. Compared with a newly established EOG-based HMI, the proposed HMI can generate more commands with higher accuracy, lower FPR and fewer electrodes.
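
    A minimal sketch of the blink-synchronization idea: blinks are detected in the vertical EOG channel by simple amplitude thresholding, and each blink is credited to the button whose flash interval contains it. The sampling rate, threshold, and flash schedule are assumptions standing in for the published algorithm's details.

        import numpy as np

        FS = 250                     # assumed sampling rate, Hz
        THRESH = 200e-6              # assumed blink amplitude threshold, volts

        def detect_blinks(veog: np.ndarray) -> np.ndarray:
            """Return sample indices where the vertical EOG crosses the threshold upward."""
            above = veog > THRESH
            return np.flatnonzero(above[1:] & ~above[:-1]) + 1

        def score_buttons(blink_samples, flash_schedule):
            """flash_schedule: list of (button_id, start_sample, end_sample) per flash."""
            votes = {}
            for b in blink_samples:
                for button, start, end in flash_schedule:
                    if start <= b < end:
                        votes[button] = votes.get(button, 0) + 1
            # the selected button is the one most often synchronized with blinks
            return max(votes, key=votes.get) if votes else None

        veog = np.zeros(5 * FS)
        veog[2 * FS] = 300e-6                                     # one synthetic blink at t = 2 s
        schedule = [(i, i * FS, (i + 1) * FS) for i in range(5)]  # one button flash per second
        print(score_buttons(detect_blinks(veog), schedule))       # -> 2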

  4. Histological Evaluation of a Chronically-implanted Electrocorticographic Electrode Grid in a Non-human Primate

    PubMed Central

    Degenhart, Alan D.; Eles, James; Dum, Richard; Mischel, Jessica L.; Smalianchuk, Ivan; Endler, Bridget; Ashmore, Robin C.; Tyler-Kabara, Elizabeth C.; Hatsopoulos, Nicholas G.; Wang, Wei; Batista, Aaron P.; Cui, X. Tracy

    2016-01-01

    Electrocorticography (ECoG), used as a neural recording modality for brain-machine interfaces (BMIs), potentially allows for field potentials to be recorded from the surface of the cerebral cortex for long durations without suffering the host-tissue reaction to the extent that it is common with intracortical microelectrodes. Though the stability of signals obtained from chronically-implanted ECoG electrodes has begun receiving attention, to date little work has characterized the effects of long-term implantation of ECoG electrodes on underlying cortical tissue. We implanted a high-density ECoG electrode grid subdurally over cortical motor areas of a Rhesus macaque for 666 days. Histological analysis revealed minimal damage to the cortex underneath the implant, though the grid itself was encapsulated in collagenous tissue. We observed macrophages and foreign body giant cells at the tissue-array interface, indicative of a stereotypical foreign body response. Despite this encapsulation, cortical modulation during reaching movements was observed more than 18 months post-implantation. These results suggest that ECoG may provide a means by which stable chronic cortical recordings can be obtained with comparatively little tissue damage, facilitating the development of clinically-viable brain-machine interface systems. PMID:27351722

  5. Research interface on a programmable ultrasound scanner.

    PubMed

    Shamdasani, Vijay; Bae, Unmin; Sikdar, Siddhartha; Yoo, Yang Mo; Karadayi, Kerem; Managuli, Ravi; Kim, Yongmin

    2008-07-01

    Commercial ultrasound machines in the past did not provide the ultrasound researchers access to raw ultrasound data. Lack of this ability has impeded evaluation and clinical testing of novel ultrasound algorithms and applications. Recently, we developed a flexible ultrasound back-end where all the processing for the conventional ultrasound modes, such as B, M, color flow and spectral Doppler, was performed in software. The back-end has been incorporated into a commercial ultrasound machine, the Hitachi HiVision 5500. The goal of this work is to develop an ultrasound research interface on the back-end for acquiring raw ultrasound data from the machine. The research interface has been designed as a software module on the ultrasound back-end. To increase the amount of raw ultrasound data that can be spooled in the limited memory available on the back-end, we have developed a method that can losslessly compress the ultrasound data in real time. The raw ultrasound data could be obtained in any conventional ultrasound mode, including duplex and triplex modes. Furthermore, use of the research interface does not decrease the frame rate or otherwise affect the clinical usability of the machine. The lossless compression of the ultrasound data in real time can increase the amount of data spooled by approximately 2.3 times, thus allowing more than 6s of raw ultrasound data to be acquired in all the modes. The interface has been used not only for early testing of new ideas with in vitro data from phantoms, but also for acquiring in vivo data for fine-tuning ultrasound applications and conducting clinical studies. We present several examples of how newer ultrasound applications, such as elastography, vibration imaging and 3D imaging, have benefited from this research interface. Since the research interface is entirely implemented in software, it can be deployed on existing HiVision 5500 ultrasound machines and may be easily upgraded in the future. The developed research interface can aid researchers in the rapid testing and clinical evaluation of new ultrasound algorithms and applications. Additionally, we believe that our approach would be applicable to designing research interfaces on other ultrasound machines.
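
    As a back-of-the-envelope illustration of spooling losslessly compressed frames, the sketch below runs zlib over a synthetic frame; the actual codec and data layout used by the back-end are not described in code form in the article, so the resulting ratio is only indicative.

        import zlib
        import numpy as np

        rng = np.random.default_rng(1)
        # Synthetic "raw ultrasound" frame: a smooth random walk compresses reasonably well.
        frame = np.cumsum(rng.normal(size=(512, 256)), axis=1).astype(np.int16)

        raw = frame.tobytes()
        packed = zlib.compress(raw, 1)            # low compression level keeps latency small
        print(len(raw) / len(packed))             # compression ratio (data dependent)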

  6. Automatic Speech Recognition in Air Traffic Control: a Human Factors Perspective

    NASA Technical Reports Server (NTRS)

    Karlsson, Joakim

    1990-01-01

    The introduction of Automatic Speech Recognition (ASR) technology into the Air Traffic Control (ATC) system has the potential to improve overall safety and efficiency. However, because ASR technology is inherently a part of the man-machine interface between the user and the system, the human factors issues involved must be addressed. Here, some of the human factors problems are identified and related methods of investigation are presented. Research at M.I.T.'s Flight Transportation Laboratory is being conducted from a human factors perspective, focusing on intelligent parser design, presentation of feedback, error correction strategy design, and optimal choice of input modalities.

  7. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Emma M.; Hendrix, Val; Chertkov, Michael

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals – such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis are becoming significant, with more data and multi-objective concerns. Efficient applications of analysis and the machine learning field are being considered in the loop.

  8. Experimental Characterization and Modeling of Thermal Contact Resistance of Electric Machine Stator-to-Cooling Jacket Interface Under Interference Fit Loading

    DOE PAGES

    Cousineau, Justine Emily; Bennion, Kevin S.; Chieduko, Victor; ...

    2018-05-08

    Cooling of electric machines is a key to increasing power density and improving reliability. This paper focuses on the design of a machine using a cooling jacket wrapped around the stator. The thermal contact resistance (TCR) between the electric machine stator and cooling jacket is a significant factor in overall performance and is not well characterized. This interface is typically an interference fit subject to compressive pressure exceeding 5 MPa. An experimental investigation of this interface was carried out using a thermal transmittance setup using pressures between 5 and 10 MPa. Furthermore, the results were compared to currently available models for contact resistance, and one model was adapted for prediction of TCR in future motor designs.

  9. Experimental Characterization and Modeling of Thermal Contact Resistance of Electric Machine Stator-to-Cooling Jacket Interface Under Interference Fit Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cousineau, Justine Emily; Bennion, Kevin S.; Chieduko, Victor

    Cooling of electric machines is a key to increasing power density and improving reliability. This paper focuses on the design of a machine using a cooling jacket wrapped around the stator. The thermal contact resistance (TCR) between the electric machine stator and cooling jacket is a significant factor in overall performance and is not well characterized. This interface is typically an interference fit subject to compressive pressure exceeding 5 MPa. An experimental investigation of this interface was carried out using a thermal transmittance setup using pressures between 5 and 10 MPa. Furthermore, the results were compared to currently available models for contact resistance, and one model was adapted for prediction of TCR in future motor designs.
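
    The quantity characterized in the two records above reduces to a simple relation: the thermal contact resistance is the temperature drop across the interface divided by the heat flux through it. The sketch below encodes that relation with placeholder numbers, not the measured values from the paper.

        def thermal_contact_resistance(delta_T_K: float, heat_flux_W_per_m2: float) -> float:
            """TCR in m^2*K/W for a measured temperature drop and heat flux."""
            return delta_T_K / heat_flux_W_per_m2

        # e.g. a 1.5 K drop at 30 kW/m^2 across the stator-to-jacket interface (illustrative)
        print(thermal_contact_resistance(1.5, 30e3))   # 5e-05 m^2*K/W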

  10. The desktop interface in intelligent tutoring systems

    NASA Technical Reports Server (NTRS)

    Baudendistel, Stephen; Hua, Grace

    1987-01-01

    The interface between an Intelligent Tutoring System (ITS) and the person being tutored is critical to the success of the learning process. If the interface to the ITS is confusing or non-supportive of the tutored domain, the effectiveness of the instruction will be diminished or lost entirely. Consequently, the interface to an ITS should be highly integrated with the domain to provide a robust and semantically rich learning environment. In building an ITS for ZetaLISP on a LISP Machine, a Desktop Interface was designed to support a programming learning environment. Using the bitmapped display, windows, and mouse, three desktops were designed to support self-study and tutoring of ZetaLISP. Through organization, well-defined boundaries, and domain support facilities, the desktops provide substantial flexibility and power for the student and facilitate learning ZetaLISP programming while screening the student from the complex LISP Machine environment. The student can concentrate on learning ZetaLISP programming and not on how to operate the interface or a LISP Machine.

  11. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources.

    PubMed

    Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems.
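
    A highly simplified stand-in for the decision-fusion idea (not the FDMF algorithm itself): each source's class probability is weighted by a confidence derived from its reported uncertainty before the two are combined. The weighting rule and the numbers are assumptions for illustration.

        def fuse(p_human: float, u_human: float, p_machine: float, u_machine: float) -> float:
            """Return a fused target probability; u_* in [0, 1] is each source's uncertainty."""
            w_h, w_m = 1.0 - u_human, 1.0 - u_machine       # more uncertain -> less weight
            if w_h + w_m == 0:
                return 0.5                                   # no usable evidence
            return (w_h * p_human + w_m * p_machine) / (w_h + w_m)

        # RSVP/BCI says 0.70 target but is noisy; computer vision says 0.40 and is confident.
        print(fuse(p_human=0.70, u_human=0.6, p_machine=0.40, u_machine=0.2))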

  12. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources

    PubMed Central

    Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734

  13. Graphical user interfaces for symbol-oriented database visualization and interaction

    NASA Astrophysics Data System (ADS)

    Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger

    1997-04-01

    In this approach, two basic services designed for the engineering of computer based systems are combined: a symbol-oriented man-machine-service and a high speed database-service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI-builder and a GUI-manager for the database service based upon the man-machine service using the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI-builder and GUI-manager, a user can build and operate its own graphical user interface for a given database according to its needs without writing a single line of code.

  14. Assessment of Human Factors

    NASA Technical Reports Server (NTRS)

    Mount, Frances; Foley, Tico

    1999-01-01

    Human Factors Engineering, often referred to as Ergonomics, is a science that applies a detailed understanding of human characteristics, capabilities, and limitations to the design, evaluation, and operation of environments, tools, and systems for work and daily living. Human Factors is the investigation, design, and evaluation of equipment, techniques, procedures, facilities, and human interfaces, and encompasses all aspects of human activity from manual labor to mental processing and leisure time enjoyments. In spaceflight applications, human factors engineering seeks to: (1) ensure that a task can be accomplished, (2) maintain productivity during spaceflight, and (3) ensure the habitability of the pressurized living areas. DSO 904 served as a vehicle for the verification and elucidation of human factors principles and tools in the microgravity environment. Over six flights, twelve topics were investigated. This study documented the strengths and limitations of human operators in a complex, multifaceted, and unique environment. By focusing on the man-machine interface in space flight activities, it was determined which designs allow astronauts to be optimally productive during valuable and costly space flights. Among the most promising areas of inquiry were procedures, tools, habitat, environmental conditions, tasking, work load, flexibility, and individual control over work.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roger Lew; Ronald L. Boring; Thomas A. Ulrich

    Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs, but the set of tools for developing and designing HMIs is still in its infancy. Here we propose that Microsoft Windows Presentation Foundation (WPF) is well suited for many roles in the research and development of HMIs for process control.

  16. Advances in neuroprosthetic learning and control.

    PubMed

    Carmena, Jose M

    2013-01-01

    Significant progress has occurred in the field of brain-machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action.

  17. Advances in Neuroprosthetic Learning and Control

    PubMed Central

    Carmena, Jose M.

    2013-01-01

    Significant progress has occurred in the field of brain–machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action. PMID:23700383

  18. Insect-machine interface based neurocybernetics.

    PubMed

    Bozkurt, Alper; Gilmour, Robert F; Sinha, Ayesa; Stern, David; Lal, Amit

    2009-06-01

    We present details of a novel bioelectric interface formed by placing microfabricated probes into insects during metamorphic growth cycles. The inserted microprobes emerge with the insect; the development of tissue around the electronics during pupal development yields mechanically stable and electrically reliable structures coupled to the insect. Remarkably, the insects do not react adversely or otherwise to the inserted electronics at the pupal stage, as they do when electrodes are inserted at adult stages. We report on the electrical and mechanical characteristics of this novel bioelectronic interface, which we believe will be adopted by many investigators seeking to study biological behavior in insects with negligible or minimal traumatic effect compared to probes inserted at adult stages. This novel insect-machine interface also allows for hybrid insect-machine platforms for further studies. As an application, we demonstrate our first results toward navigation of flight in moths. When instrumented with equipment to gather information for environmental sensing, such insects can potentially assist humans in monitoring the ecosystems that we share with them for sustainability. The simplicity of the optimized surgical procedure we invented allows for batch insertions for automatic and mass production of such hybrid insect-machine platforms. Therefore, our bioelectronic interface and hybrid insect-machine platform enable multidisciplinary scientific and engineering studies not only to investigate the details of insect behavioral physiology but also to control it.

  19. Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma

    PubMed Central

    Wrzeszczynski, Kazimierz O.; Frank, Mayu O.; Koyama, Takahiko; Rhrissorrakrai, Kahn; Robine, Nicolas; Utro, Filippo; Emde, Anne-Katrin; Chen, Bo-Juen; Arora, Kanika; Shah, Minita; Vacic, Vladimir; Norel, Raquel; Bilal, Erhan; Bergmann, Ewa A.; Moore Vogel, Julia L.; Bruce, Jeffrey N.; Lassman, Andrew B.; Canoll, Peter; Grommes, Christian; Harvey, Steve; Parida, Laxmi; Michelini, Vanessa V.; Zody, Michael C.; Jobanputra, Vaidehi; Royyuru, Ajay K.

    2017-01-01

    Objective: To analyze a glioblastoma tumor specimen with 3 different platforms and compare potentially actionable calls from each. Methods: Tumor DNA was analyzed by a commercial targeted panel. In addition, tumor-normal DNA was analyzed by whole-genome sequencing (WGS) and tumor RNA was analyzed by RNA sequencing (RNA-seq). The WGS and RNA-seq data were analyzed by a team of bioinformaticians and cancer oncologists, and separately by IBM Watson Genomic Analytics (WGA), an automated system for prioritizing somatic variants and identifying drugs. Results: More variants were identified by WGS/RNA analysis than by targeted panels. WGA completed a comparable analysis in a fraction of the time required by the human analysts. Conclusions: The development of an effective human-machine interface in the analysis of deep cancer genomic datasets may provide potentially clinically actionable calls for individual patients in a more timely and efficient manner than currently possible. ClinicalTrials.gov identifier: NCT02725684. PMID:28740869

  20. Mimicking Neurotransmitter Release in Chemical Synapses via Hysteresis Engineering in MoS2 Transistors.

    PubMed

    Arnold, Andrew J; Razavieh, Ali; Nasr, Joseph R; Schulman, Daniel S; Eichfeld, Chad M; Das, Saptarshi

    2017-03-28

    Neurotransmitter release in chemical synapses is fundamental to diverse brain functions such as motor action, learning, cognition, emotion, perception, and consciousness. Moreover, improper functioning or abnormal release of neurotransmitter is associated with numerous neurological disorders such as epilepsy, sclerosis, schizophrenia, Alzheimer's disease, and Parkinson's disease. We have utilized hysteresis engineering in a back-gated MoS2 field effect transistor (FET) in order to mimic such neurotransmitter release dynamics in chemical synapses. All three essential features, i.e., quantal, stochastic, and excitatory or inhibitory nature of neurotransmitter release, were accurately captured in our experimental demonstration. We also mimicked an important phenomenon called long-term potentiation (LTP), which forms the basis of human memory. Finally, we demonstrated how to engineer the LTP time by operating the MoS2 FET in different regimes. Our findings could provide a critical component toward the design of next-generation smart and intelligent human-like machines and human-machine interfaces.

  1. A motion sensing-based framework for robotic manipulation.

    PubMed

    Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing

    2016-01-01

    To date, outside of controlled environments, robots normally perform manipulation tasks in cooperation with humans. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture-based interface, has inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments have been conducted for preliminary validation. The results have shown that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.

  2. Experimental setup for evaluating an adaptive user interface for teleoperation control

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interaction with a machine is the control interface, which alone can define user satisfaction and the efficiency of performing a task. This paper elaborates the implementation of an experimental setup to study an adaptive algorithm that can help the user better tele-operate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and associated results that were used to validate the adaptive interface on a differential drive robot with two different input devices: a joystick and a Myo gesture control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
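
    A toy genetic-algorithm sketch in the spirit of the adaptive interface: it evolves two gain parameters that map a canned joystick trace to differential-drive motion so that a point robot ends near a goal. The fitness function, input trace, and GA settings are illustrative assumptions, not the authors' formulation.

        import numpy as np

        rng = np.random.default_rng(0)
        GOAL = np.array([2.0, 1.0])
        INPUTS = [(1.0, 0.2)] * 30 + [(0.5, -0.3)] * 20     # assumed joystick trace (fwd, turn)

        def simulate(gains):
            k_v, k_w = gains
            x, y, th = 0.0, 0.0, 0.0
            for fwd, turn in INPUTS:
                v, w = k_v * fwd, k_w * turn                # candidate input-output mapping
                th += 0.1 * w
                x += 0.1 * v * np.cos(th)
                y += 0.1 * v * np.sin(th)
            return -np.hypot(*(np.array([x, y]) - GOAL))    # fitness: negative final distance

        pop = rng.uniform(0.0, 2.0, size=(40, 2))           # initial population of gain pairs
        for _ in range(50):
            fit = np.array([simulate(g) for g in pop])
            parents = pop[np.argsort(fit)[-20:]]            # keep the fitter half
            children = parents[rng.integers(0, 20, size=20)] + rng.normal(0, 0.05, size=(20, 2))
            pop = np.vstack([parents, children])            # elitism plus mutated offspring

        best = pop[np.argmax([simulate(g) for g in pop])]
        print("evolved gains:", best)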

  3. Hybrid EEG-EOG brain-computer interface system for practical machine control.

    PubMed

    Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid

    2010-01-01

    Practical issues such as accuracy across various subjects, the number of sensors, and the time required for training are important problems for existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no action, and complete stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.
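
    A hedged sketch of how such a hybrid decision rule might look (our illustration, not the authors' classifier): one EOG channel is thresholded for left/right turns, while alpha-band power from one EEG channel selects forward, no action, or stop. All thresholds and band choices are assumptions.

        import numpy as np

        def eog_command(eog_window, thresh=150e-6):
            """Large positive/negative horizontal EOG deflection -> turn command."""
            peak = eog_window[np.argmax(np.abs(eog_window))]
            if peak > thresh:
                return "RIGHT"
            if peak < -thresh:
                return "LEFT"
            return None

        def eeg_command(eeg_window, fs=250, low_power=5e-12, high_power=20e-12):
            """Alpha-band (8-13 Hz) power as a crude proxy; expects ~1 s of data."""
            spec = np.abs(np.fft.rfft(eeg_window)) ** 2 / len(eeg_window)
            freqs = np.fft.rfftfreq(len(eeg_window), 1.0 / fs)
            alpha = spec[(freqs >= 8) & (freqs <= 13)].mean()
            if alpha > high_power:
                return "STOP"
            if alpha < low_power:
                return "FORWARD"
            return "NO_ACTION"

        def hybrid_command(eog_window, eeg_window):
            # EOG takes priority; otherwise fall back to the EEG decision.
            return eog_command(eog_window) or eeg_command(eeg_window)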

  4. Application of the user-centred design process according to ISO 9241-210 in air traffic control.

    PubMed

    König, Christina; Hofmann, Thomas; Bruder, Ralph

    2012-01-01

    Designing a usable human machine interface for air traffic control is challenging and should follow approved methods. The ISO 9241-210 standard promises high usability of products by integrating future users and following an iterative process. This contribution describes the proceeding and first results of the analysis and application of ISO 9241-210 to develop a planning tool for air traffic controllers.

  5. An evaluation of the ATM man/machine interface. Phase 3: Analysis of SL-3 and SL-4 data

    NASA Technical Reports Server (NTRS)

    Bathurst, J. R., Jr.; Pain, R. F.; Ludewig, D. B.

    1974-01-01

    The functional adequacy of human factored crew operated systems under operational zero-gravity conditions is considered. Skylab ATM experiment operations generated sufficient telemetry and voice transcript data to support such an assessment effort. Discussions are presented pertaining to the methodology and procedures used to evaluate the hardware, training and directive aspects of Skylab 3 and Skylab 4 manned ATM experiment operations.

  6. Stretchable human-machine interface based on skin-conformal sEMG electrodes with self-similar geometry

    NASA Astrophysics Data System (ADS)

    Dong, Wentao; Zhu, Chen; Hu, Wei; Xiao, Lin; Huang, Yong'an

    2018-01-01

    Stretchable surface electrodes have attracted increasing attention owing to their potential applications in biological signal monitoring, wearable human-machine interfaces (HMIs) and the Internet of Things. This paper proposes a stretchable HMI based on a surface electromyography (sEMG) electrode with a self-similar serpentine configuration. The sEMG electrode was transfer-printed conformally onto the skin surface to monitor biological signals, followed by signal classification and control of a mobile robot. Such electrodes can bear rather large deformation (>30%) under an appropriate areal coverage. The sEMG electrodes have been used to record electrophysiological signals from parts of the body with sharp curvature, such as the index finger, back of the neck and face, and they exhibit great potential for HMIs in robotics and healthcare. Electrodes placed on the two wrists generate two distinct signals as each fist is clenched or relaxed; combining the gestures of the two wrists therefore yields four distinguishable signal classes, that is, four control modes. Experiments demonstrated that the electrodes were successfully used as an HMI to control the motion of a mobile robot remotely. Project supported by the National Natural Science Foundation of China (Nos. 51635007, 91323303).
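
    To make the four control modes concrete, here is a minimal sketch (not the authors' classifier) mapping two binary wrist states, fist clenched or relaxed on each side, to four robot commands; the RMS threshold and the mode assignments are assumptions.

        import numpy as np

        def fist_clenched(semg_window, rms_thresh=50e-6):
            # High RMS amplitude in the sEMG window is taken as a clenched fist.
            return np.sqrt(np.mean(np.square(semg_window))) > rms_thresh

        MODES = {(False, False): "STOP",
                 (True,  False): "TURN_LEFT",
                 (False, True):  "TURN_RIGHT",
                 (True,  True):  "FORWARD"}

        def control_mode(left_wrist_window, right_wrist_window):
            return MODES[(fist_clenched(left_wrist_window),
                          fist_clenched(right_wrist_window))]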

  7. From pilot's associate to satellite controller's associate

    NASA Technical Reports Server (NTRS)

    Neyland, David L.; Lizza, Carl; Merkel, Philip A.

    1992-01-01

    Associate technology is an emerging engineering discipline wherein intelligent automation can significantly augment the performance of man-machine systems. An associate system is one that monitors operator activity and adapts its operational behavior accordingly. Associate technology is most effectively applied when mapped into management of the human-machine interface and the display-control loop of typical manned systems. This paper addresses the potential for applying associate technology to intelligent command and control of satellite systems, from diagnosis of onboard and on-ground fault conditions to execution of nominal satellite control functions. Rather than specifying a specific solution, this paper draws parallels between the Pilot's Associate concept and the domain of satellite control.

  8. Kinematic design to improve ergonomics in human machine interaction.

    PubMed

    Schiele, André; van der Helm, Frans C T

    2006-12-01

    This paper introduces a novel kinematic design paradigm for ergonomic human machine interaction. Goals for optimal design are formulated generically and applied to the mechanical design of an upper-arm exoskeleton. A nine degree-of-freedom (DOF) model of the human arm kinematics is presented and used to develop, test, and optimize the kinematic structure of a human arm interfacing exoskeleton. The resulting device can interact with an unprecedented portion of the natural limb workspace, including motions in the shoulder-girdle, shoulder, elbow, and the wrist. The exoskeleton does not require alignment to the human joint axes, yet is able to actuate each DOF of our redundant limb unambiguously and without reaching into singularities. The device is comfortable to wear and does not create residual forces if misalignments exist. Implemented in a rehabilitation robot, the design features of the exoskeleton could enable longer lasting training sessions, training of fully natural tasks such as activities of daily living, and shorter dress-on and dress-off times. Results from inter-subject experiments with a prototype are presented that verify usability over the entire workspace of the human arm, including the shoulder and shoulder girdle.

  9. Intellicount: High-Throughput Quantification of Fluorescent Synaptic Protein Puncta by Machine Learning

    PubMed Central

    Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.

    2017-01-01

    Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
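
    For contrast, here is a sketch of the kind of simple-thresholding baseline that Intellicount is designed to improve upon (this is not the Intellicount ML pipeline itself), using scikit-image; the minimum-area parameter is an assumption.

        import numpy as np
        from skimage import filters, measure, morphology

        def count_puncta(image, min_area=4):
            """Count fluorescent puncta by Otsu thresholding + connected components."""
            mask = image > filters.threshold_otsu(image)          # global threshold
            mask = morphology.remove_small_objects(mask, min_size=min_area)
            labels = measure.label(mask)                          # label each punctum
            regions = measure.regionprops(labels, intensity_image=image)
            return len(regions), [r.mean_intensity for r in regions]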

  10. Development of the FITS tools package for multiple software environments

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; Blackburn, J. K.

    1992-01-01

    The HEASARC is developing a package of general purpose software for analyzing data files in FITS format. This paper describes the design philosophy which makes the software both machine-independent (it runs on VAXs, Suns, and DEC-stations) and software environment-independent. Currently the software can be compiled and linked to produce IRAF tasks, or alternatively, the same source code can be used to generate stand-alone tasks using one of two implementations of a user-parameter interface library. The machine independence of the software is achieved by writing the source code in ANSI standard Fortran or C, using the machine-independent FITSIO subroutine interface for all data file I/O, and using a standard user-parameter subroutine interface for all user I/O. The latter interface is based on the Fortran IRAF Parameter File interface developed at STScI. The IRAF tasks are built by linking to the IRAF implementation of this parameter interface library. Two other implementations of this parameter interface library, which have no IRAF dependencies, are now available which can be used to generate stand-alone executable tasks. These stand-alone tasks can simply be executed from the machine operating system prompt either by supplying all the task parameters on the command line or by entering the task name after which the user will be prompted for any required parameters. A first release of this FTOOLS package is now publicly available. The currently available tasks are described, along with instructions on how to obtain a copy of the software.
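
    As a rough modern analogue of the stand-alone task pattern described above (parameters taken from the command line or prompted for, followed by FITS I/O), the following sketch uses Python with astropy.io.fits in place of the Fortran FITSIO library; it is illustrative only and not the original FTOOLS code.

        import sys
        from astropy.io import fits

        def get_param(name, argv):
            # Parameters may be supplied on the command line as name=value ...
            for arg in argv[1:]:
                if arg.startswith(name + "="):
                    return arg.split("=", 1)[1]
            # ... otherwise the task prompts the user for the missing parameter.
            return input(f"{name}? ")

        if __name__ == "__main__":
            infile = get_param("infile", sys.argv)
            with fits.open(infile) as hdul:
                hdul.info()                    # list the HDUs in the FITS file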

  11. An assisted navigation training framework based on judgment theory using sparse and discrete human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano

    2009-01-01

    This paper presents a new framework to train people with severe motor disabilities to steer an assisted mobile robot (AMR), such as a powered wheelchair. Users with a high level of motor disability are not able to use standard HMIs that provide a continuous command signal (e.g., a standard joystick). For this reason, HMIs providing a small set of simple commands, which are sparse and discrete in time, must be used (e.g., a scanning interface or a brain-computer interface), making it very difficult to steer the AMR. The assisted navigation training framework (ANTF) is therefore designed to train users to drive the AMR in indoor structured environments using this type of HMI. Additionally, it characterizes how users steer the robot, which will later be used to adapt the AMR navigation system to the user's steering competence. A rule-based lens (RBL) model is used to characterize users driving the AMR. Individual judgment performance in choosing the best manoeuvres is modeled using a genetic-based policy capturing (GBPC) technique designed to infer non-compensatory judgment strategies from human decision data. Three user models, at three different learning stages, using the RBL paradigm, are presented.

  12. Formal analysis and automatic generation of user interfaces: approach, methodology, and an algorithm.

    PubMed

    Heymann, Michael; Degani, Asaf

    2007-04-01

    We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.

  13. Flying Unmanned Aircraft: A Pilot's Perspective

    NASA Technical Reports Server (NTRS)

    Pestana, Mark E.

    2011-01-01

    The National Aeronautics and Space Administration (NASA) is pioneering various Unmanned Aircraft System (UAS) technologies and procedures which may enable routine access to the National Airspace System (NAS), with an aim toward the Next Gen NAS. These tools will aid in the development of technologies and integrated capabilities that will enable high-value missions for science, security, and defense, and open the door to low-cost, extreme-duration, stratospheric flight. A century of aviation evolution has resulted in accepted standards and best practices in the design of human-machine interfaces, the displays and controls of which serve to optimize safe and efficient flight operations and situational awareness. The current proliferation of non-standard, aircraft-specific flight crew interfaces in UAS, coupled with the inherent limitations of operating UAS without in-situ sensory input and feedback (aural, visual, and vestibular cues), has increased the risk of mishaps associated with the design of the "cockpit." Examples of current non- or sub-standard design features range from "annoying" and "inefficient" to those that are difficult to manipulate or interpret in a timely manner, as well as to those that are "burdensome" and "unsafe." A concerted effort is required to establish best practices and standards for the human-machine interfaces, for the pilot as well as the air traffic controller. In addition, roles, responsibilities, knowledge, and skill sets may require redefining the terms "pilot" and "air traffic controller" with respect to operating UAS, especially in the Next Gen NAS. The knowledge, skill sets, training, and qualification standards for UAS operations must be established, and must reflect the aircraft-specific human-machine interfaces and control methods. NASA's recent experience flying its MQ-9 Ikhana in the NAS for extended durations has enabled both NASA and the FAA to realize the full potential of UAS, as well as to understand the implications of current limitations. Ikhana is a Predator-B/Reaper UAS, built by General Atomics Aeronautical Systems, Inc., and modified for research. Since 2007, the aircraft has been flown seasonally with a wing-mounted pod containing an infrared scanner, utilized to provide real-time wildfire geo-location data to various fire-fighting agencies in the western U.S. The multi-agency effort included an extensive process to obtain flight clearance from the FAA to operate under special provisions, given that UAS in general do not fully comply with current airspace regulations (e.g., sense-and-avoid requirements).

  14. A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    PubMed Central

    Corti, Kevin; Gillespie, Alex

    2015-01-01

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence. PMID:26042066

  15. Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.

    PubMed

    McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A

    2017-06-28

    To review the state of the art of robotic-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robotic-aided hand physiotherapy devices and brain-machine interface systems is outlined, focussing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the 2 major core designs: end-effector and exoskeleton are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice, and may help system designers to develop new devices.

  16. KARL: A Knowledge-Assisted Retrieval Language. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1985-01-01

    Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, relationships between entities, and basic English language syntax and semantics to translate the user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that will assist in improving the human-machine interface will be needed and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development into the emerging field of natural language database query systems.

  17. Data storage technology: Hardware and software, Appendix B

    NASA Technical Reports Server (NTRS)

    Sable, J. D.

    1972-01-01

    This project involves the development of more economical ways of integrating and interfacing new storage devices and data processing programs into a computer system. It involves developing interface standards and a software/hardware architecture which will make it possible to develop machine-independent devices and programs. These will interface with the machine-dependent operating systems of particular computers. The development project will not be to develop the software, which would ordinarily be the responsibility of the manufacturer to supply, but to develop the standards with which that software is expected to conform in providing an interface with the user or storage system.

  18. Three-dimensional virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    The development of an alternative medium for displaying information in complex human-machine interfaces is described. The 3-D virtual acoustic display is a means for accurately transferring information to a human operator using the auditory modality; it combines directional and semantic characteristics to form naturalistic representations of dynamic objects and events in remotely sensed or simulated environments. Although the technology can stand alone, it is envisioned as a component of a larger multisensory environment and will no doubt find its greatest utility in that context. The general philosophy in the design of the display has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and only later by technological capabilities or constraints. In expanding on this view, current and potential uses of virtual acoustic displays are addressed, such displays are characterized, recent approaches to their implementation and application are reviewed, the research project at NASA-Ames is described in detail, and finally some critical research issues for the future are outlined.

  19. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  20. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  1. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  2. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  3. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  4. Proceedings of the first workshop on Peripheral Machine Interfaces: going beyond traditional surface electromyography

    PubMed Central

    Castellini, Claudio; Artemiadis, Panagiotis; Wininger, Michael; Ajoudani, Arash; Alimusaj, Merkur; Bicchi, Antonio; Caputo, Barbara; Craelius, William; Dosen, Strahinja; Englehart, Kevin; Farina, Dario; Gijsberts, Arjan; Godfrey, Sasha B.; Hargrove, Levi; Ison, Mark; Kuiken, Todd; Marković, Marko; Pilarski, Patrick M.; Rupp, Rüdiger; Scheme, Erik

    2014-01-01

    One of the hottest topics in rehabilitation robotics is that of proper control of prosthetic devices. Despite decades of research, the state of the art is dramatically behind expectations. To shed light on this issue, in June 2013 the first international workshop on the Present and future of non-invasive peripheral nervous system (PNS)–Machine Interfaces (MI; PMI) was convened, hosted by the International Conference on Rehabilitation Robotics. The keyword PMI has been selected to denote human–machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, dealing with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of opinions expressed by each and every researcher/group involved in it. PMID:25177292

  5. A restrained-torque-based motion instructor: forearm flexion/extension-driving exoskeleton

    NASA Astrophysics Data System (ADS)

    Nishimura, Takuya; Nomura, Yoshihiko; Sakamoto, Ryota

    2013-01-01

    When learning complicated movements by ourselves, we encounter such problems as self-conviction. This self-conviction results in a lack of detail and objectivity, and it may cause us to miss the essences of a motion or even to distort them. Thus, we sometimes fall into the habit of performing inappropriate motions. To solve these problems, or at least alleviate them as much as possible, we have been developing mechanical man-machine interfaces to support the learning of motions such as cultural gestures and sports forms. One promising interface is a wearable exoskeleton mechanical system. As a first try, we have made a prototype of a 2-link 1-DOF rotational elbow joint interface for teaching extension-flexion operations of the forearm, and have found that it has potential for teaching the initiation and continuation of elbow flexion motion.

  6. A Workshop on the Gathering of Information for Problem Formulation

    DTIC Science & Technology

    1991-06-01

    the AI specialists is to design "artificially intelligent" computer environments that tutor students in much the same way that a human teacher might... tuning the interface between student and machine, and are using a technique of in situ development to tune the system toward realistic user needs. ... of transferability to new domains, while the latter suffers from extreme fragility: the inability to cope with any input not strictly conforming with

  7. Man-Machine Interface (MMI) Requirements Definition and Design Guidelines

    DTIC Science & Technology

    1981-02-01

    be provided to interrogate the user to resolve any input ambiguities resulting from hardware limitations; see Smith and Goodwin, 1971. Reference: Smith, S. L. and Goodwin, N. C. Alphabetic data entry via the Touch-Tone pad: A comment. Human Factors, 1971, 13(2), 189-190. ... software designer. Reference: Miller, R. B. Response time in man-computer conversational transactions. In Proceedings of the AFIPS Fall Joint Computer

  8. Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non LOCA and small-break-LOCA transients; safety goals; pressurized thermal shocks; applications of reliability and risk methods to probabilistic risk assessment; human factors and man-machine interface; and data bases and special applications.

  9. The Mind and the Machine. On the Conceptual and Moral Implications of Brain-Machine Interaction.

    PubMed

    Schermer, Maartje

    2009-12-01

    Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback, and neuroprostheses. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our 'symbolic order', that is, on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding the distribution and attribution of responsibility.

  10. Pixels, people, perception, pet peeves, and possibilities: a look at displays

    NASA Astrophysics Data System (ADS)

    Task, H. Lee

    2007-04-01

    This year marks the 35th anniversary of the Visually Coupled Systems symposium held at Brooks Air Force Base, San Antonio, Texas in November of 1972. This paper uses the proceedings of the 1972 VCS symposium as a guide to address several topics associated primarily with helmet-mounted displays, systems integration and the human-machine interface. Specific topics addressed include monocular and binocular helmet-mounted displays (HMDs), visor projection HMDs, color HMDs, system integration with aircraft windscreens, visual interface issues and others. In addition, this paper also addresses a few mysteries and irritations (pet peeves) collected over the past 35+ years of experience in the display and display related areas.

  11. Designing Guiding Systems for Brain-Computer Interfaces

    PubMed Central

    Kosmyna, Nataliya; Lécuyer, Anatole

    2017-01-01

    The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore instructions and feedback for BCIs in order to provide a systematic taxonomy for describing BCI guiding systems. The purpose of our work is to give researchers and designers in Human–Computer Interaction (HCI) the clues necessary to make the fusion between BCIs and HCI more fruitful, and also to better understand the possibilities BCIs can offer them. PMID:28824400

  12. Model and experiments to optimize co-adaptation in a simplified myoelectric control system.

    PubMed

    Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A

    2018-04-01

    To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that were developed in the field of brain-machine interfaces and are beginning to be used in myoelectric control. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
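
    A worked toy version of the co-adaptation trade-off described above (our simplification, not the authors' model): the human corrects the directional error with learning rate alpha while the machine rotates its decoder with co-adaptation gain beta, so a larger beta speeds convergence but also amplifies trial-to-trial noise. All numbers are assumptions.

        import numpy as np

        def simulate(rotation_deg=30.0, alpha=0.2, beta=0.1, noise_deg=3.0,
                     trials=100, seed=0):
            rng = np.random.default_rng(seed)
            human, machine = 0.0, 0.0              # compensations, in degrees
            errors = []
            for _ in range(trials):
                error = rotation_deg - human - machine + rng.normal(0, noise_deg)
                human += alpha * error             # human error-driven adaptation
                machine += beta * error            # machine co-adaptation step
                errors.append(error)
            return np.array(errors)

        for beta in (0.0, 0.1, 0.4):
            e = simulate(beta=beta)
            print(f"beta={beta}: mean |error| over last 20 trials = "
                  f"{np.mean(np.abs(e[-20:])):.2f} deg")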

  13. The Portals 4.0 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2012-11-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  14. Human factors in technology replacement: a case study in interface design for a public transport monitoring system.

    PubMed

    Harper, J G; Fuller, R; Sweeney, D; Waldmann, T

    1998-04-01

    This paper describes ergonomic issues raised during a project to provide a replacement real-time bus route control system to a large public transport company. Task and system analyses highlighted several deficiencies in the original system architecture, the human-machine interfaces and the general approach to system management. The eventual live prototype replaced the existing original system for a trial evaluation period of several weeks. During this period a number of studies were conducted with the system users in order to measure any improvements the new system, with its ergonomic features, produced over the old. Importantly, the results confirmed that (a) general responsiveness and service quality were improved, and (b) users were more comfortable with the new design. We conclude with a number of caveats which we believe will be useful to any group addressing technology impact in a large organisation.

  15. Tattoolike Polyaniline Microparticle-Doped Gold Nanowire Patches as Highly Durable Wearable Sensors.

    PubMed

    Gong, Shu; Lai, Daniel T H; Wang, Yan; Yap, Lim Wei; Si, Kae Jye; Shi, Qianqian; Jason, Naveen Noah; Sridhar, Tam; Uddin, Hemayet; Cheng, Wenlong

    2015-09-09

    Wearable and highly sensitive strain sensors are essential components of electronic skin for future biomonitoring and human-machine interfaces. Here we report a low-cost yet efficient strategy to dope polyaniline microparticles into gold nanowire (AuNW) films, leading to a 10-fold enhancement in conductivity and an ~8-fold improvement in sensitivity. Simultaneously, tattoolike wearable sensors could be fabricated simply by a direct "draw-on" strategy with a Chinese penbrush. The stretchability of the sensors could be enhanced from 99.7% to 149.6% by designing curved tattoos with different radii of curvature. We also demonstrated a roller coating method to encapsulate the AuNW sensors, which then exhibit excellent water resistance and durability. Because of the improved conductivity of our sensors, they can directly interface with existing wireless circuitry, allowing for fabrication of wireless flexion sensors for a human finger-controlled robotic arm system.

  16. [Neurophysiological Foundations and Practical Realizations of the Brain-Machine Interface Technology in Neurological Rehabilitation].

    PubMed

    Kaplan, A Ya

    2016-01-01

    Brain-computer interface (BCI) technology based on the registration and interpretation of EEG has recently become one of the most popular developments in neuroscience and psychophysiology. This is due not only to the intended future use of these technologies in many areas of practical human activity, but also to the fact that BCI is a completely new paradigm in psychophysiology, allowing one to test hypotheses about the ability of the human brain to develop skills for interacting with the outside world without the mediation of the motor system, i.e., only with the help of voluntary modulation of EEG generators. This paper examines the theoretical and experimental basis, the current state, and the prospects for development of training, communication, and assistive complexes based on BCI, controlled without muscular effort on the basis of mental commands detected in the EEG of patients with severely impaired speech and motor systems.

  17. Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.

    PubMed

    Durandau, Guillaume; Farina, Dario; Sartori, Massimo

    2018-03-01

    Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.
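
    A highly simplified sketch of an EMG-driven pipeline of this kind (not the authors' calibrated musculoskeletal model): a rectified EMG envelope passes through first-order activation dynamics, is scaled by an assumed maximal force, and is projected onto a joint through fixed moment arms. Every parameter here is an assumption.

        import numpy as np

        def activation(emg_env, dt=0.01, tau=0.03):
            """First-order excitation-to-activation dynamics."""
            a = np.zeros_like(emg_env)
            for i in range(1, len(emg_env)):
                a[i] = a[i - 1] + dt / tau * (emg_env[i] - a[i - 1])
            return a

        def joint_moment(emg_envelopes, f_max, moment_arms):
            """Sum of muscle forces times (signed) moment arms about one DOF."""
            forces = np.array([activation(e) * f for e, f in zip(emg_envelopes, f_max)])
            return moment_arms @ forces                 # one moment per time sample

        # Toy example: two antagonist muscles acting about one joint.
        t = np.linspace(0, 1, 100)
        emg = [0.5 * (1 + np.sin(2 * np.pi * t)) / 2, 0.2 * np.ones_like(t)]
        print(joint_moment(emg, f_max=[1000.0, 800.0],
                           moment_arms=np.array([0.04, -0.03]))[:5])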

  18. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

    PubMed

    Crouser, R J; Chang, R

    2012-12-01

    Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

  19. The Muscle Sensor for on-site neuroscience lectures to pave the way for a better understanding of brain-machine-interface research.

    PubMed

    Koizumi, Amane; Nagata, Osamu; Togawa, Morio; Sazi, Toshiyuki

    2014-01-01

    Neuroscience is an expanding field of science that investigates the enigmas of brain and human body function. However, the majority of the public have never had the chance to learn the basics of neuroscience or new knowledge from advanced neuroscience research through hands-on experience. Here, we report that we produced the Muscle Sensor, a simplified electromyography device, to promote educational understanding in neuroscience. The Muscle Sensor detects myoelectric potentials, which are filtered and processed as 3-V pulse signals to light a bulb and emit beep sounds. With this educational tool, we delivered "On-Site Neuroscience Lectures" in Japanese junior-high schools to facilitate hands-on experience of neuroscientific electrophysiology and to connect textbook knowledge to advanced neuroscience research. On-site neuroscience lectures with the Muscle Sensor pave the way for a better understanding of the basics of neuroscience and of the latest topics, such as how brain-machine-interface technology could help patients with disabilities such as spinal cord injuries. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  20. Selectivity and Longevity of Peripheral-Nerve and Machine Interfaces: A Review

    PubMed Central

    Ghafoor, Usman; Kim, Sohee; Hong, Keum-Shik

    2017-01-01

    For individuals with upper-extremity amputation, normal daily living activities are no longer possible or require additional effort and time. With the aim of restoring their sensory and motor functions, theoretical and technological investigations have been carried out in the field of neuroprosthetic systems. For transmission of sensory feedback, several interfacing modalities including indirect (non-invasive), direct-to-peripheral-nerve (invasive), and cortical stimulation have been applied. Peripheral nerve interfaces demonstrate an edge over the cortical interfaces due to the sensitivity in attaining cortical brain signals. The peripheral nerve interfaces are highly dependent on interface designs and are required to be biocompatible with the nerves to achieve prolonged stability and longevity. Another criterion is the selection of nerves that allows minimal invasiveness and damage as well as high selectivity for a large number of nerve fascicles. In this paper, we review the nerve-machine interface modalities noted above with more focus on peripheral nerve interfaces, which are responsible for provision of sensory feedback. The invasive interfaces for recording and stimulation of electro-neurographic signals include intra-fascicular, regenerative-type interfaces that provide multiple contact channels to a group of axons inside the nerve and the extra-neural-cuff-type interfaces that enable interaction with many axons around the periphery of the nerve. Section Current Prosthetic Technology summarizes the advancements made to date in the field of neuroprosthetics toward the achievement of a bidirectional nerve-machine interface with more focus on sensory feedback. In the Discussion section, the authors propose a hybrid interface technique for achieving better selectivity and long-term stability using the available nerve interfacing techniques. PMID:29163122

  1. European public deliberation on brain machine interface technology: five convergence seminars.

    PubMed

    Jebari, Karim; Hansson, Sven-Ove

    2013-09-01

    We present a novel procedure to engage the public in ethical deliberations on the potential impacts of brain machine interface technology. We call this procedure a convergence seminar, a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of five seminars are presented.

  2. Man-machine analysis of translation and work tasks of Skylab films

    NASA Technical Reports Server (NTRS)

    Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.

    1979-01-01

    An objective approach to determine the concurrent validity of computer-graphic models is real time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.

  3. Virtual reality in surgical training.

    PubMed

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  4. Advances in data representation for hard/soft information fusion

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey C.; Coughlin, Dan; Hall, David L.; Graham, Jacob L.

    2012-06-01

    Information fusion is becoming increasingly human-centric. While past systems typically relegated humans to the role of analyzing a finished fusion product, current systems are exploring the role of humans as integral elements in a modular and extensible distributed framework where many tasks can be accomplished by either human or machine performers. For example, "participatory sensing" campaigns give humans the role of "soft sensors" by uploading their direct observations or as "soft sensor platforms" by using mobile devices to record human-annotated, GPS-encoded high quality photographs, video, or audio. Additionally, the role of "human-in-the-loop", in which individuals or teams using advanced human computer interface (HCI) tools such as stereoscopic 3D visualization, haptic interfaces, or aural "sonification" interfaces can help to effectively engage the innate human capability to perform pattern matching, anomaly identification, and semantic-based contextual reasoning to interpret an evolving situation. The Pennsylvania State University is participating in a Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office to investigate fusion of hard and soft data in counterinsurgency (COIN) situations. In addition to the importance of this research for Intelligence Preparation of the Battlefield (IPB), many of the same challenges and techniques apply to health and medical informatics, crisis management, crowd-sourced "citizen science", and monitoring environmental concerns. One of the key challenges that we have encountered is the development of data formats, protocols, and methodologies to establish an information architecture and framework for the effective capture, representation, transmission, and storage of the vastly heterogeneous data and accompanying metadata -- including capabilities and characteristics of human observers, uncertainty of human observations, "soft" contextual data, and information pedigree. This paper describes our findings and offers insights into the role of data representation in hard/soft fusion.

  5. Enhanced operator interface for hand-held landmine detector

    NASA Astrophysics Data System (ADS)

    Herman, Herman; McMahill, Jeffrey D.; Kantor, George

    2001-10-01

    As landmines get harder to detect, the complexity of landmine detectors has also been increasing. To increase the probability of detection and decrease the false alarm rate for low-metallic landmines, many detectors employ multiple sensing modalities, including radar and a metal detector. Unfortunately, the operator interface for these new detectors has stayed much the same as for the older detectors. Although the amount of information that the new detectors acquire has increased significantly, the interface has been limited to a simple audio interface. We are currently developing a hybrid audiovisual interface for enhancing the overall performance of the detector. The hybrid audiovisual interface combines the simplicity of audio output with the rich spatial content of a video display. It is designed to optimally present the output of the detector and also to give proper feedback to the operator. Instead of presenting all the data to the operator simultaneously, the interface allows the operator to access the information as needed. This capability is critical to avoid information overload, which can significantly reduce the performance of the operator. The audio is used as the primary notification signal, while the video is used for further feedback, discrimination, localization, and sensor fusion. The idea is to let the operator get the feedback that he needs and enable him to look at the data in the most efficient way. We are also looking at a hybrid man-machine detection system which utilizes precise sweeping by the machine and powerful human cognitive ability. In such a hybrid system, the operator is free to concentrate on the discrimination task, such as manually fusing the outputs of the different sensing modalities, instead of worrying about proper sweep technique. In developing this concept, we have been using a virtual mine lane to validate some of these ideas. We obtained very encouraging results from our preliminary test, which clearly show that with proper feedback, the performance of the operator can be improved significantly in a very short time.

  6. Human-machine interfaces based on EMG and EEG applied to robotic systems.

    PubMed

    Ferreira, Andre; Celeste, Wanderley C; Cheein, Fernando A; Bastos-Filho, Teodiano F; Sarcinelli-Filho, Mario; Carelli, Ricardo

    2008-03-26

    Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals. One is based on the EMG signal and the other is based on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which need only modest hardware and software resources, so that they are, computationally and financially speaking, low-cost solutions. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested on a mobile robot, while the EEG-based HMI was tested on a mobile robot and a robotic manipulator as well. Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to accomplish ten eye blinks with each eye, in order to test the eye blink detection algorithm. An average success rate of about 95%, reached by individuals with the ability to blink both eyes, allowed us to conclude that the system could be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered cases of meningitis or epilepsy) to test the system. All of them managed to deal with the HMI in only one training session. Most of them learnt how to use such an HMI in less than 15 minutes. The minimum and maximum training times observed were 3 and 50 minutes, respectively. This work is the initial part of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the autonomous wheelchair thus obtained to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, aiming at using it to help individuals with severe motor dysfunctions.
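
    A minimal sketch of a threshold-based blink detector of the sort such an EMG-based HMI might use (our illustration, not the authors' algorithm); the amplitude threshold and refractory period are assumptions.

        import numpy as np

        def detect_blinks(emg, fs=1000, thresh=200e-6, refractory_s=0.3):
            """Return sample indices where the rectified EMG crosses the threshold."""
            above = np.abs(emg) > thresh
            blinks, last = [], -np.inf
            for i in np.flatnonzero(above):
                if i - last > refractory_s * fs:   # ignore samples within one blink
                    blinks.append(i)
                    last = i
            return blinks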

  7. Modifications to Optimize the AH-1Z Human Machine Interface

    DTIC Science & Technology

    2013-04-18

    To accomplish this, a complete workload study of tasks performed by aircrew in the AH-1Z must be completed in the near future in order to understand... design flaws and guide future design and integration of increased capability. Additionally, employment of material solutions to provide aircrew with the...

  8. A Survey of Research in Supervisory Control and Data Acquisition (SCADA)

    DTIC Science & Technology

    2014-09-01

    distance learning. The data acquired may be operationally oriented and used to better run the system, or it could be strategic in nature and used to... Technically, the SCADA system is composed of the information technology (IT) that provides the human-machine interface (HMI) and stores and analyzes the data... systems work by learning what normal or benign traffic is and reporting on any abnormal traffic. These systems have the potential to detect zero-day

  9. Developing Human-Machine Interfaces to Support Appropriate Trust and Reliance on Automated Combat Identification Systems

    DTIC Science & Technology

    2007-09-17

    ...Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens & Hollands, 2000). In SDT, the participants' performance is characterized by two... probability, whereas their sensitivity will stay constant (Macmillan & Creelman, 1991; Wickens & Hollands, 2000). If this hypothesis holds, it will... (Macmillan & Creelman, 1991, p. 273), and it was also the measure used in Dzindolet et al.'s study (2001a). Thus, C was used in the analysis...

  10. Microstructured graphene arrays for highly sensitive flexible tactile sensors.

    PubMed

    Zhu, Bowen; Niu, Zhiqiang; Wang, Hong; Leow, Wan Ru; Wang, Hua; Li, Yuangang; Zheng, Liyan; Wei, Jun; Huo, Fengwei; Chen, Xiaodong

    2014-09-24

    A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers. The combination of graphene and anisotropic microstructures endows this sensor with an ultra-high sensitivity of −5.53 kPa⁻¹, an ultra-fast response time of only 0.2 ms, as well as good reliability, rendering it promising for the application of tactile sensing in artificial skin and human-machine interface. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Decoding semantic information from human electrocorticographic (ECoG) signals.

    PubMed

    Wang, Wei; Degenhart, Alan D; Sudre, Gustavo P; Pomerleau, Dean A; Tyler-Kabara, Elizabeth C

    2011-01-01

    This study examined the feasibility of decoding semantic information from human cortical activity. Four human subjects undergoing presurgical brain mapping and seizure foci localization participated in this study. Electrocorticographic (ECoG) signals were recorded while the subjects performed simple language tasks involving semantic information processing, such as a picture naming task where subjects named pictures of objects belonging to different semantic categories. Robust high-gamma band (60-120 Hz) activation was observed at the left inferior frontal gyrus (LIFG) and the posterior portion of the superior temporal gyrus (pSTG) with a temporal sequence corresponding to speech production and perception. Furthermore, Gaussian Naïve Bayes and Support Vector Machine classifiers, two commonly used machine learning algorithms for pattern recognition, were able to predict the semantic category of an object using cortical activity captured by ECoG electrodes covering the frontal, temporal and parietal cortices. These findings have implications for both basic neuroscience research and development of semantic-based brain-computer interface systems (BCI) that can help individuals with severe motor or communication disorders to express their intention and thoughts.
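
    As an illustration of the classification step described above (not the authors' code), the following Python sketch cross-validates Gaussian Naive Bayes and linear SVM classifiers on a trials-by-features matrix such as high-gamma band power per ECoG electrode; the data here are random placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per trial, one column per feature (e.g., high-gamma band power
#    on each ECoG electrode); y: semantic category of the named object.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # random placeholder features
y = rng.integers(0, 4, size=200)        # random placeholder labels (4 categories)

classifiers = [
    ("Gaussian Naive Bayes", GaussianNB()),
    ("Linear SVM", make_pipeline(StandardScaler(), SVC(kernel="linear"))),
]
for name, clf in classifiers:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```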

  12. The human role in space. Volume 3: Generalizations on human roles in space

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The human role in space was studied. The role and the degree of direct involvement of humans that will be required in future space missions were investigated. Valid criteria for allocating functional activities between humans and machines were established. The technology requirements, economics, and benefits of the human presence in space were examined. Factors which affect crew productivity include: internal architecture; crew support; crew activities; LVA systems; IVA/EVA interfaces; and remote systems management. The accomplished work is reported and the data and analyses from which the study results are derived are included. The results provide information and guidelines to enable NASA program managers and decision makers to establish, early in the design process, the most cost-effective design approach for future space programs, through the optimal application of unique human skills and capabilities in space.

  13. The portals 4.0.1 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2013-04-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  14. Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control

    NASA Astrophysics Data System (ADS)

    Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.

    2012-04-01

    A brain-machine interface (BMI) links a user's brain activity directly to an external device. It enables a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has also been proposed to replace humans with robots in the performance of dangerous tasks like explosives handling/defusing, hazardous materials handling, fire fighting, etc. There are mainly two types of BMI based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features and, finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface with a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. The acceleration and deceleration or stopping of the robot is controlled based on the attention level of the wearer. In addition, the left and right movements of the eyeball control the direction of the robot.
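
    A minimal Python sketch of the band-power-to-command chain described above; the engagement ratio used as the attention level and the two thresholds are common heuristics assumed for illustration, not the paper's classifier.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Mean power in each classical EEG band for a 1-D signal window."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def attention_level(eeg, fs):
    """A common engagement heuristic: beta power relative to alpha + theta."""
    p = band_powers(eeg, fs)
    return p["beta"] / (p["alpha"] + p["theta"] + 1e-12)

def speed_command(attention, accel_thresh=1.0, stop_thresh=0.4):
    """Map the attention level to a discrete robot command: high attention
    accelerates the robot, low attention stops it, anything else slows it."""
    if attention >= accel_thresh:
        return "accelerate"
    if attention <= stop_thresh:
        return "stop"
    return "decelerate"
```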

  15. Volitional enhancement of firing synchrony and oscillation by neuronal operant conditioning: interaction with neurorehabilitation and brain-machine interface

    PubMed Central

    Sakurai, Yoshio; Song, Kichan; Tachibana, Shota; Takahashi, Susumu

    2014-01-01

    In this review, we focus on neuronal operant conditioning in which increments in neuronal activities are directly rewarded without behaviors. We discuss the potential of this approach to elucidate neuronal plasticity for enhancing specific brain functions and its interaction with the progress in neurorehabilitation and brain-machine interfaces. The key to-be-conditioned activities that this paper emphasizes are synchronous and oscillatory firings of multiple neurons that reflect activities of cell assemblies. First, we introduce certain well-known studies on neuronal operant conditioning in which conditioned enhancements of neuronal firing were reported in animals and humans. These studies demonstrated the feasibility of volitional control over neuronal activity. Second, we refer to the recent studies on operant conditioning of synchrony and oscillation of neuronal activities. In particular, we introduce a recent study showing volitional enhancement of oscillatory activity in monkey motor cortex and our study showing selective enhancement of firing synchrony of neighboring neurons in rat hippocampus. Third, we discuss the reasons for emphasizing firing synchrony and oscillation in neuronal operant conditioning, the main reason being that they reflect the activities of cell assemblies, which have been suggested to be basic neuronal codes representing information in the brain. Finally, we discuss the interaction of neuronal operant conditioning with neurorehabilitation and brain-machine interface (BMI). We argue that synchrony and oscillation of neuronal firing are the key activities required for developing both reliable neurorehabilitation and high-performance BMI. Further, we conclude that research of neuronal operant conditioning, neurorehabilitation, BMI, and system neuroscience will produce findings applicable to these interrelated fields, and neuronal synchrony and oscillation can be a common important bridge among all of them. PMID:24567704

  16. Volitional enhancement of firing synchrony and oscillation by neuronal operant conditioning: interaction with neurorehabilitation and brain-machine interface.

    PubMed

    Sakurai, Yoshio; Song, Kichan; Tachibana, Shota; Takahashi, Susumu

    2014-01-01

    In this review, we focus on neuronal operant conditioning in which increments in neuronal activities are directly rewarded without behaviors. We discuss the potential of this approach to elucidate neuronal plasticity for enhancing specific brain functions and its interaction with the progress in neurorehabilitation and brain-machine interfaces. The key to-be-conditioned activities that this paper emphasizes are synchronous and oscillatory firings of multiple neurons that reflect activities of cell assemblies. First, we introduce certain well-known studies on neuronal operant conditioning in which conditioned enhancements of neuronal firing were reported in animals and humans. These studies demonstrated the feasibility of volitional control over neuronal activity. Second, we refer to the recent studies on operant conditioning of synchrony and oscillation of neuronal activities. In particular, we introduce a recent study showing volitional enhancement of oscillatory activity in monkey motor cortex and our study showing selective enhancement of firing synchrony of neighboring neurons in rat hippocampus. Third, we discuss the reasons for emphasizing firing synchrony and oscillation in neuronal operant conditioning, the main reason being that they reflect the activities of cell assemblies, which have been suggested to be basic neuronal codes representing information in the brain. Finally, we discuss the interaction of neuronal operant conditioning with neurorehabilitation and brain-machine interface (BMI). We argue that synchrony and oscillation of neuronal firing are the key activities required for developing both reliable neurorehabilitation and high-performance BMI. Further, we conclude that research of neuronal operant conditioning, neurorehabilitation, BMI, and system neuroscience will produce findings applicable to these interrelated fields, and neuronal synchrony and oscillation can be a common important bridge among all of them.

  17. Modified automatic teller machine prototype for older adults: a case study of participative approach to inclusive design.

    PubMed

    Chan, Chetwyn C H; Wong, Alex W K; Lee, Tatia M C; Chi, Iris

    2009-03-01

    The goal of this study was to enhance an existing automated teller machine (ATM) human-machine interface in order to accommodate the needs of older adults. Older adults were involved in the design and field test of the modified ATM prototype. The design of the user interface and functionality took the cognitive and physical abilities of older adults into account. The modified ATM system included only "cash withdrawal" and "transfer" functions based on the task demands and needs for services of older adults. One hundred and forty-one older adults (aged 60 or above) participated in the field test by operating modified or existing ATM systems. Those who operated the modified system were found to have significantly higher success rates than those who operated the existing system. The enhancement was most significant among older adults who had lower ATM-related abilities, a lower level of education, and no prior experience of using ATMs. This study demonstrates the usefulness of using a universal design and participatory approach to modify the existing ATM system for use by older adults. However, it also leads to a reduction in functionality of the enhanced system. Future studies should explore ways to develop a universal design ATM system which can satisfy the abilities and needs of all users in the entire population.

  18. Improving Performance During Image-Guided Procedures

    PubMed Central

    Duncan, James R.; Tabriz, David

    2015-01-01

    Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628

  19. Evaluating the Toxicity of Cigarette Whole Smoke Solutions in an Air-Liquid-Interface Human In Vitro Airway Tissue Model.

    PubMed

    Cao, Xuefei; Muskhelishvili, Levan; Latendresse, John; Richter, Patricia; Heflich, Robert H

    2017-03-01

    Exposure to cigarette smoke causes a multitude of pathological changes leading to tissue damage and disease. Quantifying such changes in highly differentiated in vitro human tissue models may assist in evaluating the toxicity of tobacco products. In this methods development study, well-differentiated human air-liquid-interface (ALI) in vitro airway tissue models were used to assess toxicological endpoints relevant to tobacco smoke exposure. Whole mainstream smoke solutions (WSSs) were prepared from 2 commercial cigarettes (R60 and S60) that differ in smoke constituents when machine-smoked under International Organization for Standardization conditions. The airway tissue models were exposed apically to WSSs 4-h per day for 1-5 days. Cytotoxicity, tissue barrier integrity, oxidative stress, mucin secretion, and matrix metalloproteinase (MMP) excretion were measured. The treatments were not cytotoxic and had marginal effects on tissue barrier properties; however, other endpoints responded in time- and dose-dependent manners, with the R60 resulting in higher levels of response than the S60 for many endpoints. Based on the lowest effect dose, differences in response to the WSSs were observed for mucin induction and MMP secretion. Mitigation of mucin induction by cotreatment of cultures with N-acetylcysteine suggests that oxidative stress contributes to mucus hypersecretion. Overall, these preliminary results suggest that quantifying disease-relevant endpoints using ALI airway models is a potential tool for tobacco product toxicity evaluation. Additional research using tobacco samples generated under smoking machine conditions that more closely approximate human smoking patterns will inform further methods development. Published by Oxford University Press on behalf of the Society of Toxicology 2017. This work is written by US Government employees and is in the public domain in the US.

  20. Effect of Different Movement Speed Modes on Human Action Observation: An EEG Study.

    PubMed

    Luo, Tian-Jian; Lv, Jitu; Chao, Fei; Zhou, Changle

    2018-01-01

    Action observation (AO) generates event-related desynchronization (ERD) suppressions in the human brain by activating partial regions of the human mirror neuron system (hMNS). The activation of the hMNS response to AO remains controversial for several reasons. Therefore, this study investigated the activation of the hMNS response to a speed factor of AO by controlling the movement speed modes of a humanoid robot's arm movements. Since hMNS activation is reflected by ERD suppressions, electroencephalography (EEG) with BCI analysis methods for ERD suppressions was used as the recording and analysis modality. Six healthy individuals were asked to participate in experiments comprising five different conditions. Four incremental-speed AO tasks and a motor imagery (MI) task involving imagining the same movement were presented to the individuals. Occipital and sensorimotor regions were selected for BCI analyses. The experimental results showed that hMNS activation was higher in the occipital region but more robust in the sensorimotor region. Since the attended information impacts the activations of the hMNS during AO, the pattern of hMNS activations first rises and subsequently falls to a stable level during incremental-speed modes of AO. The discipline curves suggested that a moderate speed within a decent inter-stimulus interval (ISI) range produced the highest hMNS activations. Since a brain-computer/machine interface (BCI) builds a pathway between human and computer/machine, the discipline curves will help to construct BCIs based on patterns of action observation (AO-BCI). Furthermore, a new method for constructing non-invasive brain-machine-brain interfaces (BMBIs) with moderate AO-BCI and motor imagery BCI (MI-BCI) was inspired by this paper.
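
    For reference, ERD suppression is usually quantified as the percentage power change in a frequency band relative to a pre-stimulus baseline; the Python sketch below computes that measure for a single trial (band edges and window placement are assumptions, not the study's exact parameters).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(trial, fs, band=(8.0, 13.0), baseline=(0.0, 1.0), window=(2.0, 3.0)):
    """Event-related desynchronization for one EEG trial.

    The signal is band-pass filtered (default: alpha/mu band), squared to get
    instantaneous power, and ERD% is the relative power change of the analysis
    window with respect to the pre-stimulus baseline:
        ERD% = (P_window - P_baseline) / P_baseline * 100
    Negative values indicate the power suppression associated with hMNS activation.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    power = filtfilt(b, a, trial) ** 2
    p_base = power[int(baseline[0] * fs):int(baseline[1] * fs)].mean()
    p_win = power[int(window[0] * fs):int(window[1] * fs)].mean()
    return (p_win - p_base) / p_base * 100.0
```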

  1. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.

    PubMed

    Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C; López, Vladimir; Mery, Domingo; Cosmelli, Diego

    2017-01-01

    Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants who master the task.
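
    As a toy illustration of fitting a hidden Markov model to sequences of binary choices (a plain categorical HMM, not the hierarchical model or the Mixture of Experts used in the study), the Python sketch below assumes the third-party hmmlearn package and uses made-up trial data.

```python
import numpy as np
from hmmlearn.hmm import CategoricalHMM   # assumes hmmlearn is installed

# Each trial is a sequence of binary choices through a 4-level decision tree
# (0 = left branch, 1 = right branch); trials are concatenated for fitting.
trials = [np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0]), np.array([1, 1, 0, 1])]
X = np.concatenate(trials).reshape(-1, 1)
lengths = [len(t) for t in trials]

# Hidden states stand in for latent search strategies (toy data, 2 states).
model = CategoricalHMM(n_components=2, n_iter=100, random_state=0)
model.fit(X, lengths)

# Decode the most likely strategy sequence for a new partial trial and
# predict the next choice from the current state's emission probabilities.
states = model.predict(np.array([[0], [1], [1]]))
next_choice = np.argmax(model.emissionprob_[states[-1]])
print("inferred strategy states:", states, "predicted next choice:", next_choice)
```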

  2. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    NASA Technical Reports Server (NTRS)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will improve both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and light weight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when facing constraints on computational resources.
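
    The abstract lists beamforming/multi-channel noise reduction as the first processing step but does not give the algorithm; as a rough, self-contained illustration, the Python sketch below implements a classic frequency-domain delay-and-sum beamformer (the microphone geometry, look direction, and plane-wave assumption are illustrative, not details from the paper).

```python
import numpy as np

def delay_and_sum(mic_signals, fs, mic_positions, look_direction, c=343.0):
    """Frequency-domain delay-and-sum beamformer.

    mic_signals:    (n_mics, n_samples) synchronized microphone recordings.
    mic_positions:  (n_mics, 3) microphone coordinates in metres.
    look_direction: unit vector from the array origin toward the talker.
    Each channel is phase-shifted so the talker's plane wave adds coherently
    across microphones while diffuse noise adds incoherently.
    """
    n_mics, n_samples = mic_signals.shape
    # wavefront reaches mic m earlier than the array origin by delays[m] seconds
    delays = mic_positions @ look_direction / c
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    # remove each channel's lead so all channels align to the origin reference
    aligned = spectra * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)
```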

  3. A Human-machine-interface Integrating Low-cost Sensors with a Neuromuscular Electrical Stimulation System for Post-stroke Balance Rehabilitation.

    PubMed

    Kumar, Deepesh; Das, Abhijit; Lahiri, Uttama; Dutta, Anirban

    2016-04-12

    A stroke is caused when an artery carrying blood from the heart to an area of the brain bursts, or a clot obstructs the blood flow to the brain, thereby preventing delivery of oxygen and nutrients. About half of the stroke survivors are left with some degree of disability. Innovative methodologies for restorative neurorehabilitation are urgently required to reduce long-term disability. The ability of the nervous system to reorganize its structure, function and connections as a response to intrinsic or extrinsic stimuli is called neuroplasticity. Neuroplasticity is involved in post-stroke functional disturbances, but also in rehabilitation. Beneficial neuroplastic changes may be facilitated with non-invasive electrotherapy, such as neuromuscular electrical stimulation (NMES) and sensory electrical stimulation (SES). NMES involves coordinated electrical stimulation of motor nerves and muscles to activate them with continuous short pulses of electrical current while SES involves stimulation of sensory nerves with electrical current resulting in sensations that vary from barely perceivable to highly unpleasant. Here, active cortical participation in rehabilitation procedures may be facilitated by driving the non-invasive electrotherapy with biosignals (electromyogram (EMG), electroencephalogram (EEG), electrooculogram (EOG)) that represent simultaneous active perception and volitional effort. To achieve this in a resource-poor setting, e.g., in low- and middle-income countries, we present a low-cost human-machine-interface (HMI) by leveraging recent advances in off-the-shelf video game sensor technology. In this paper, we discuss the open-source software interface that integrates low-cost off-the-shelf sensors for visual-auditory biofeedback with non-invasive electrotherapy to assist postural control during balance rehabilitation. We demonstrate the proof-of-concept on healthy volunteers.

  4. A Human-machine-interface Integrating Low-cost Sensors with a Neuromuscular Electrical Stimulation System for Post-stroke Balance Rehabilitation

    PubMed Central

    Kumar, Deepesh; Das, Abhijit; Lahiri, Uttama; Dutta, Anirban

    2016-01-01

    A stroke is caused when an artery carrying blood from the heart to an area of the brain bursts, or a clot obstructs the blood flow to the brain, thereby preventing delivery of oxygen and nutrients. About half of the stroke survivors are left with some degree of disability. Innovative methodologies for restorative neurorehabilitation are urgently required to reduce long-term disability. The ability of the nervous system to reorganize its structure, function and connections as a response to intrinsic or extrinsic stimuli is called neuroplasticity. Neuroplasticity is involved in post-stroke functional disturbances, but also in rehabilitation. Beneficial neuroplastic changes may be facilitated with non-invasive electrotherapy, such as neuromuscular electrical stimulation (NMES) and sensory electrical stimulation (SES). NMES involves coordinated electrical stimulation of motor nerves and muscles to activate them with continuous short pulses of electrical current while SES involves stimulation of sensory nerves with electrical current resulting in sensations that vary from barely perceivable to highly unpleasant. Here, active cortical participation in rehabilitation procedures may be facilitated by driving the non-invasive electrotherapy with biosignals (electromyogram (EMG), electroencephalogram (EEG), electrooculogram (EOG)) that represent simultaneous active perception and volitional effort. To achieve this in a resource-poor setting, e.g., in low- and middle-income countries, we present a low-cost human-machine-interface (HMI) by leveraging recent advances in off-the-shelf video game sensor technology. In this paper, we discuss the open-source software interface that integrates low-cost off-the-shelf sensors for visual-auditory biofeedback with non-invasive electrotherapy to assist postural control during balance rehabilitation. We demonstrate the proof-of-concept on healthy volunteers. PMID:27166666

  5. Remapping residual coordination for controlling assistive devices and recovering motor functions.

    PubMed

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias B; Mussa-Ivaldi, Ferdinando A; Casadio, Maura

    2015-12-01

    The concept of human motor redundancy has attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in human-machine interface (HMI) development. The HMI can act at different levels, decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs completely bypass the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user's residual upper body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between the left and right sides of the body, with an increase of mobility and strength of all the degrees of freedom in the participants involved in the control of the interface. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. Copyright © 2015 Elsevier Ltd. All rights reserved.
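
    The abstract describes the PCA-based map from residual upper-body motion to a two-dimensional cursor only in general terms; the Python sketch below is a minimal illustration of that idea (the sensor count, the placeholder calibration data, and the channel re-weighting used to mimic a map modification are assumptions, not the authors' protocol).

```python
import numpy as np
from sklearn.decomposition import PCA

# Calibration: body-worn sensor signals recorded while the user moves freely,
# shape (n_samples, n_signals). Random placeholder data for illustration only.
rng = np.random.default_rng(0)
calibration = rng.normal(size=(5000, 8))

pca = PCA(n_components=2)      # the 2-D subspace the user spans most easily
pca.fit(calibration)

def body_to_cursor(sample, gain=1.0):
    """Project one body-signal sample onto the two principal components
    and scale the result into cursor coordinates."""
    return gain * pca.transform(np.asarray(sample).reshape(1, -1))[0]

# One crude way to mimic a therapeutic map modification: down-weight an
# over-used channel so the user must engage neglected degrees of freedom
# to retain full cursor control.
weights = np.ones(8)
weights[0] = 0.2               # hypothetical weakened channel

def remapped_cursor(sample, gain=1.0):
    return gain * pca.transform((np.asarray(sample) * weights).reshape(1, -1))[0]
```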

  6. Robots with a gentle touch: advances in assistive robotics and prosthetics.

    PubMed

    Harwin, W S

    1999-01-01

    As healthcare costs rise and an aging population places increased demands on services, new techniques must be introduced to promote an individual's independence and provide these services. Robots can now be designed so that they can alter their dynamic properties, changing from stiff to flaccid, or from giving no resistance to movement to damping any large and sudden movements. This has strong implications in health care, in particular for rehabilitation, where a robot must work in conjunction with an individual and might guide or assist a person's arm movements, or might be commanded to perform some set of autonomous actions. This paper presents the state-of-the-art of rehabilitation robots with examples from prosthetics, aids for daily living and physiotherapy. In all these situations there is the potential for the interaction to be non-passive, with a resulting potential for the human/machine/environment combination to become unstable. To understand this instability we must develop better models of the human motor system and fit these models with realistic parameters. This paper concludes with a discussion of this problem and an overview of some human models that can be used to facilitate the design of human/machine interfaces.

  7. Flexible Parsing.

    DTIC Science & Technology

    1986-06-30

    ...Machine Studies... 14. Minton, S. N., Hayes, P. J., and Fain, J. E. Controlling Search in Flexible Parsing. Proc. Ninth Int. Jt. Conf. on Artificial... interaction through the COUSIN command interface", International Journal of Man-Machine Studies, Vol. 19, No. 3, September 1983, pp. 285-305... "in a gracefully interacting user interface," "Dynamic strategy selection in flexible parsing," and "Parsing spoken language: a semantic case frame...

  8. Problems in modeling man machine control behavior in biodynamic environments

    NASA Technical Reports Server (NTRS)

    Jex, H. R.

    1972-01-01

    Reviewed are some current problems in modeling man-machine control behavior in a biodynamic environment. The review is given in two parts: (1) a review of the models which are appropriate for manual control behavior and the added elements necessary to deal with biodynamic interfaces; and (2) a review of some biodynamic interface pilot/vehicle problems which have occurred, been solved, or need to be solved.

  9. Combat Automation for Airborne Weapon Systems: Man/Machine Interface Trends and Technologies (L’Automatisation du Combat Aerien: Tendances et Technologies pour l’Interface Homme/Machine)

    DTIC Science & Technology

    1993-04-01

    ...Finally, used with the right mouse button, the elapsed-time potentiometer allows an alternative to be loaded into the system...

  10. Human factors issues for interstellar spacecraft

    NASA Technical Reports Server (NTRS)

    Cohen, Marc M.; Brody, Adam R.

    1991-01-01

    Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during transtellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.

  11. A structurally decoupled mechanism for measuring wrist torque in three degrees of freedom

    NASA Astrophysics Data System (ADS)

    Pan, Lizhi; Yang, Zhen; Zhang, Dingguo

    2015-10-01

    The wrist joint is a critical part of the human body for movement. Measuring the torque of the wrist with three degrees of freedom (DOFs) is important in some fields, including rehabilitation, biomechanics, ergonomics, and human-machine interfacing. However, the particular structure of the wrist joint makes it difficult to measure the torque in all three directions simultaneously. This work develops a structurally decoupled instrument for measuring and improving the measurement accuracy of 3-DOF wrist torque during isometric contraction. Three single-axis torque sensors were embedded in a customized mechanical structure. The dimensions and components of the instrument were designed based on requirement of manufacturability. A prototype of the instrument was machined, assembled, integrated, and tested. The results show that the structurally decoupled mechanism is feasible for acquiring wrist torque data in three directions either independently or simultaneously. As a case study, we use the device to measure wrist torques concurrently with electromyography signal acquisition in preparation for simultaneous and proportional myoelectric control of prostheses.

  12. A structurally decoupled mechanism for measuring wrist torque in three degrees of freedom.

    PubMed

    Pan, Lizhi; Yang, Zhen; Zhang, Dingguo

    2015-10-01

    The wrist joint is a critical part of the human body for movement. Measuring the torque of the wrist with three degrees of freedom (DOFs) is important in some fields, including rehabilitation, biomechanics, ergonomics, and human-machine interfacing. However, the particular structure of the wrist joint makes it difficult to measure the torque in all three directions simultaneously. This work develops a structurally decoupled instrument for measuring and improving the measurement accuracy of 3-DOF wrist torque during isometric contraction. Three single-axis torque sensors were embedded in a customized mechanical structure. The dimensions and components of the instrument were designed based on requirement of manufacturability. A prototype of the instrument was machined, assembled, integrated, and tested. The results show that the structurally decoupled mechanism is feasible for acquiring wrist torque data in three directions either independently or simultaneously. As a case study, we use the device to measure wrist torques concurrently with electromyography signal acquisition in preparation for simultaneous and proportional myoelectric control of prostheses.

  13. Chip breaking system for automated machine tool

    DOEpatents

    Arehart, Theodore A.; Carey, Donald O.

    1987-01-01

    The invention is a rotary selectively directional valve assembly for use in an automated turret lathe for directing a stream of high pressure liquid machining coolant to the interface of a machine tool and workpiece for breaking up ribbon-shaped chips during the formation thereof so as to inhibit scratching or other marring of the machined surfaces by these ribbon-shaped chips. The valve assembly is provided by a manifold arrangement having a plurality of circumferentially spaced apart ports each coupled to a machine tool. The manifold is rotatable with the turret when the turret is positioned for alignment of a machine tool in a machining relationship with the workpiece. The manifold is connected to a non-rotational header having a single passageway therethrough which conveys the high pressure coolant to only the port in the manifold which is in registry with the tool disposed in a working relationship with the workpiece. To position the machine tools, the turret is rotated and one of the tools is placed in a material-removing relationship with the workpiece. The passageway in the header and one of the ports in the manifold arrangement are then automatically aligned to supply the machining coolant to the machine tool-workpiece interface for breaking up the chips as well as cooling the tool and workpiece during the machining operation.

  14. Synaptic organic transistors with a vacuum-deposited charge-trapping nanosheet

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Hyun; Sung, Sujin; Yoon, Myung-Han

    2016-09-01

    Organic neuromorphic devices hold great promise for unconventional signal processing and efficient human-machine interfaces. Herein, we propose novel synaptic organic transistors devised to overcome the traditional trade-off between channel conductance and memory performance. A vacuum-processed, nanoscale metallic interlayer provides an ultra-flat surface for a high-mobility molecular film as well as a desirable degree of charge trapping, allowing for low-temperature fabrication of uniform device arrays on plastic. The device architecture is implemented by widely available electronic materials in combination with conventional deposition methods. Therefore, our results are expected to generate broader interests in incorporation of organic electronics into large-area neuromorphic systems, with potential in gate-addressable complex logic circuits and transparent multifunctional interfaces receiving direct optical and cellular stimulation.

  15. Use of parallel computing for analyzing big data in EEG studies of ambiguous perception

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Grubov, Vadim V.; Kirsanov, Daniil V.

    2018-02-01

    The problem of interaction between humans and machine systems through neuro-interfaces (or brain-computer interfaces) is an urgent task which requires the analysis of large amounts of neurophysiological EEG data. In the present paper we consider the methods of parallel computing as one of the most powerful tools for processing experimental data in real time with respect to the multichannel structure of EEG. In this context we demonstrate the application of parallel computing for the estimation of the spectral properties of multichannel EEG signals associated with visual perception. Using the CUDA C library we run a wavelet-based algorithm on GPUs and show the possibility of detecting specific patterns in a multichannel set of EEG data in real time.
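
    The paper implements the wavelet analysis in CUDA C on GPUs; purely as an illustration of the same channel-parallel idea on a CPU, the Python sketch below farms per-channel spectral estimation out to worker processes, with a Welch periodogram standing in for the wavelet transform (the sampling rate, channel count, and use of multiprocessing are assumptions, not the paper's implementation).

```python
import numpy as np
from multiprocessing import Pool
from scipy.signal import welch

FS = 250  # Hz, assumed EEG sampling rate

def channel_spectrum(channel):
    """Spectral estimate for one EEG channel (stand-in for the wavelet transform)."""
    freqs, psd = welch(channel, fs=FS, nperseg=FS)
    return psd

def parallel_spectra(eeg, n_workers=4):
    """Process all channels of a (n_channels, n_samples) EEG block in parallel."""
    with Pool(n_workers) as pool:
        return np.array(pool.map(channel_spectrum, list(eeg)))

if __name__ == "__main__":
    eeg = np.random.default_rng(0).normal(size=(32, 10 * FS))
    print(parallel_spectra(eeg).shape)   # (32, FS // 2 + 1)
```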

  16. Passive BCI in Operational Environments: Insights, Recent Advances, and Future Trends.

    PubMed

    Arico, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Sciaraffa, Nicolina; Colosimo, Alfredo; Babiloni, Fabio

    2017-07-01

    This minireview aims to highlight recent important aspects to consider and evaluate when passive brain-computer interface (pBCI) systems are developed and used in operational environments, and outlines future directions for their applications. Electroencephalography (EEG) based pBCI has become an important tool for real-time analysis of brain activity since it can potentially provide information about the operator's cognitive state covertly (without distracting the user from the main task) and objectively (not affected by the subjective judgment of an observer or of the user). Different examples of pBCI applications in operational environments and new adaptive interface solutions have been presented and described. In addition, a general overview regarding the correct use of machine learning techniques (e.g., which algorithm to use, common pitfalls to avoid, etc.) in the pBCI field has been provided. Despite recent innovations in algorithms and neurotechnology, pBCI systems are not completely ready to enter the market yet, mainly due to limitations of EEG electrode technology and the reliability and capability of algorithms in real settings. High-complexity and safety-critical systems (e.g., airplanes, ATM interfaces) should adapt their behaviors and functionality according to the user's actual mental state. Thus, technologies (i.e., pBCIs) able to measure the user's mental states in real time would be very useful in such "high risk" environments to enhance human-machine interaction, and so increase the overall safety.

  17. Operation of micro and molecular machines: a new concept with its origins in interface science.

    PubMed

    Ariga, Katsuhiko; Ishihara, Shinsuke; Izawa, Hironori; Xia, Hong; Hill, Jonathan P

    2011-03-21

    A landmark accomplishment of nanotechnology would be successful fabrication of ultrasmall machines that can work like tweezers, motors, or even computing devices. Now we must consider how operation of micro- and molecular machines might be implemented for a wide range of applications. If these machines function only under limited conditions and/or require specialized apparatus then they are useless for practical applications. Therefore, it is important to carefully consider the access of functionality of the molecular or nanoscale systems by conventional stimuli at the macroscopic level. In this perspective, we will outline the position of micro- and molecular machines in current science and technology. Most of these machines are operated by light irradiation, application of electrical or magnetic fields, chemical reactions, and thermal fluctuations, which cannot always be applied in remote machine operation. We also propose strategies for molecular machine operation using the most conventional of stimuli, that of macroscopic mechanical force, achieved through mechanical operation of molecular machines located at an air-water interface. The crucial roles of the characteristics of an interfacial environment, i.e. connection between macroscopic dimension and nanoscopic function, and contact of media with different dielectric natures, are also described.

  18. [A cyborg is only human].

    PubMed

    Schermer, Maartje H N

    2013-01-01

    New biomedical technologies make it possible to replace parts of the human body or to substitute its functions. Examples include artificial joints, eye lenses and arterial stents. Newer technologies use electronics and software, for example in brain-computer interfaces such as retinal implants and the exoskeleton MindWalker. Gradually we are creating cyborgs: hybrids of man and machine. This raises the question: are cyborgs still humans? It is argued that they are. First, because employing technology is a typically human characteristic. Second, because in western thought the human mind, and not the body, is considered to be the seat of personhood. However, it has been argued by phenomenological philosophers that the body is more than just an object but is also a subject, important for human identity. From this perspective, we can appreciate that a bionic body does not make one less human, but it does influence the experience of being human.

  19. Toward more versatile and intuitive cortical brain-machine interfaces.

    PubMed

    Andersen, Richard A; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson

    2014-09-22

    Brain-machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain-machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. 40 CFR 63.464 - Alternative standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (a)(2) of this section. (1) If the cleaning machine has a solvent/air interface, as defined in § 63... cleaning machines 153 New in-line solvent cleaning machines 99 (2) If the cleaning machine is a batch vapor... requirements specified in paragraphs (a)(2)(i) and (a)(2)(ii) of this section. (i) Maintain a log of solvent...

  1. 40 CFR 63.464 - Alternative standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (a)(2) of this section. (1) If the cleaning machine has a solvent/air interface, as defined in § 63... cleaning machines 153 New in-line solvent cleaning machines 99 (2) If the cleaning machine is a batch vapor... requirements specified in paragraphs (a)(2)(i) and (a)(2)(ii) of this section. (i) Maintain a log of solvent...

  2. Artificial intelligence applied to process signal analysis

    NASA Technical Reports Server (NTRS)

    Corsberg, Dan

    1988-01-01

    Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.

  3. A chronic generalized bi-directional brain-machine interface.

    PubMed

    Rouse, A G; Stanslaski, S R; Cong, P; Jensen, R M; Afshar, P; Ullestad, D; Gupta, R; Molnar, G F; Moran, D W; Denison, T J

    2011-06-01

    A bi-directional neural interface (NI) system was designed and prototyped by incorporating a novel neural recording and processing subsystem into a commercial neural stimulator architecture. The NI system prototype leverages the system infrastructure from an existing neurostimulator to ensure reliable operation in a chronic implantation environment. In addition to providing predicate therapy capabilities, the device adds key elements to facilitate chronic research, such as four channels of electrocorticogram/local field potential amplification and spectral analysis, a three-axis accelerometer, algorithm processing, event-based data logging, and wireless telemetry for data uploads and algorithm/configuration updates. The custom-integrated micropower sensor and interface circuits facilitate extended operation in a power-limited device. The prototype underwent significant verification testing to ensure reliability, and meets the requirements for a class CF instrument per IEC-60601 protocols. The ability of the device system to process and aid in classifying brain states was preclinically validated using an in vivo non-human primate model for brain control of a computer cursor (i.e. brain-machine interface or BMI). The primate BMI model was chosen for its ability to quantitatively measure signal decoding performance from brain activity that is similar in both amplitude and spectral content to other biomarkers used to detect disease states (e.g. Parkinson's disease). A key goal of this research prototype is to help broaden the clinical scope and acceptance of NI techniques, particularly real-time brain state detection. These techniques have the potential to be generalized beyond motor prostheses, and are being explored for unmet needs in other neurological conditions such as movement disorders, stroke and epilepsy.

  4. Neurofeedback Training for BCI Control

    NASA Astrophysics Data System (ADS)

    Neuper, Christa; Pfurtscheller, Gert

    Brain-computer interface (BCI) systems detect changes in brain signals that reflect human intention, then translate these signals to control monitors or external devices (for a comprehensive review, see [1]). BCIs typically measure electrical signals resulting from neural firing (i.e., neuronal action potentials, the electrocorticogram (ECoG), or the electroencephalogram (EEG)). Sophisticated pattern recognition and classification algorithms convert neural activity into the required control signals. BCI research has focused heavily on developing powerful signal processing and machine learning techniques to accurately classify neural activity [2-4].

  5. Application of the SCADA system in wastewater treatment plants.

    PubMed

    Dieu, B

    2001-01-01

    The implementation of the SCADA system has a positive impact on the operations, maintenance, process improvement and savings for the City of Houston's Wastewater Operations branch. This paper will discuss the system's evolution, the external/internal architecture, and the human-machine-interface graphical design. Finally, it will demonstrate the system's successes in monitoring the City's sewage and sludge collection/distribution systems, wet-weather facilities and wastewater treatment plants, complying with the USEPA requirements on the discharge, and effectively reducing the operations and maintenance costs.

  6. Active tactile exploration using a brain-machine-brain interface.

    PubMed

    O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L

    2011-10-05

    Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.

  7. Applications of airborne ultrasound in human-computer interaction.

    PubMed

    Dahl, Tobias; Ealo, Joao L; Bang, Hans J; Holm, Sverre; Khuri-Yakub, Pierre

    2014-09-01

    Airborne ultrasound is a rapidly developing subfield within human-computer interaction (HCI). Touchless ultrasonic interfaces and pen tracking systems are part of recent trends in HCI and are gaining industry momentum. This paper aims to provide the background and overview necessary to understand the capabilities of ultrasound and its potential future in human-computer interaction. The latest developments on the ultrasound transducer side are presented, focusing on capacitive micro-machined ultrasonic transducers, or CMUTs. Their introduction is an important step toward providing real, low-cost multi-sensor array and beam-forming options. We also provide a unified mathematical framework for understanding and analyzing algorithms used for ultrasound detection and tracking for some of the most relevant applications. Copyright © 2014. Published by Elsevier B.V.
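
    As a small illustration of the time-of-flight mathematics behind ultrasonic pen tracking (not a method taken from the paper), the Python sketch below recovers a 2-D transmitter position from arrival times at three fixed receivers by nonlinear least squares; the geometry and the assumption of a known emission instant are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_2d(tofs, receivers, c=343.0, x0=(0.1, 0.1)):
    """Estimate a 2-D transmitter position from time-of-flight measurements.

    tofs: arrival times (s) at each receiver, assuming the emission instant is known.
    receivers: (n, 2) receiver coordinates in metres.
    The range to each receiver is c * tof; the position minimising the squared
    range residuals is found with a nonlinear least-squares solver.
    """
    ranges = c * np.asarray(tofs)
    residuals = lambda p: np.linalg.norm(receivers - p, axis=1) - ranges
    return least_squares(residuals, x0).x

# Example: three receivers around a 0.4 m interaction surface
receivers = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4]])
true_pos = np.array([0.15, 0.25])
tofs = np.linalg.norm(receivers - true_pos, axis=1) / 343.0
print(locate_2d(tofs, receivers))   # approximately [0.15, 0.25]
```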

  8. Physics-based and human-derived information fusion for analysts

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael

    2017-05-01

    Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.

  9. Vibrotactile display for mobile applications based on dielectric elastomer stack actuators

    NASA Astrophysics Data System (ADS)

    Matysek, Marc; Lotz, Peter; Flittner, Klaus; Schlaak, Helmut F.

    2010-04-01

    Dielectric elastomer stack actuators (DESA) offer the possibility to build actuator arrays at very high density. The driving voltage is set by the film thickness, which ranges from 80 μm down to 5 μm, and by a driving field strength of 30 V/μm. In this paper we present the development of a vibrotactile display based on multilayer technology. The display is used to present several operating conditions of a machine in the form of haptic information to a human finger. As an example, the design of an mp3-player interface is introduced. To build an intuitive and user-friendly interface, several aspects of human haptic perception have to be considered. Using the results of preliminary user tests, the interface is designed and an appropriate actuator layout is derived. Controlling these actuators is important because there are many possibilities to present different information, e.g. by varying the driving parameters. A demonstrator was built to verify the concept: a high recognition rate of more than 90% validates the approach. A characterization of mechanical and electrical parameters proves the suitability of dielectric elastomer stack actuators for use in mobile applications.
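
    Taking the stated film thicknesses and field strength at face value, the implied driving-voltage range follows directly from the relation between field strength and thickness:

```latex
V = E\,d,\qquad
V_{5\,\mu\mathrm{m}} = 30~\mathrm{V/\mu m}\times 5~\mu\mathrm{m} = 150~\mathrm{V},\qquad
V_{80\,\mu\mathrm{m}} = 30~\mathrm{V/\mu m}\times 80~\mu\mathrm{m} = 2400~\mathrm{V}.
```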

  10. Mold Heating and Cooling Pump Package Operator Interface Controls Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josh A. Salmond

    2009-08-07

    The modernization of the Mold Heating and Cooling Pump Package Operator Interface (MHC PP OI) consisted of upgrading the antiquated single board computer with a proprietary operating system to off-the-shelf hardware and off-the-shelf software with customizable software options. The pump package is the machine interface between a central heating and cooling system that pumps heat transfer fluid through an injection or compression mold base on a local plastic molding machine. The operator interface provides the intelligent means of controlling this pumping process. Strict temperature control of a mold allows the production of high quality parts with tight tolerances and low residual stresses. The products fabricated are used on multiple programs.

  11. A Wireless 32-Channel Implantable Bidirectional Brain Machine Interface

    PubMed Central

    Su, Yi; Routhu, Sudhamayee; Moon, Kee S.; Lee, Sung Q.; Youm, WooSub; Ozturk, Yusuf

    2016-01-01

    All neural information systems (NIS) rely on sensing neural activity to supply commands and control signals for computers, machines and a variety of prosthetic devices. Invasive systems achieve a high signal-to-noise ratio (SNR) by eliminating the volume conduction problems caused by tissue and bone. An implantable brain machine interface (BMI) using intracortical electrodes provides excellent detection of a broad range of frequency oscillatory activities through the placement of a sensor in direct contact with cortex. This paper introduces a compact-sized implantable wireless 32-channel bidirectional brain machine interface (BBMI) to be used with freely-moving primates. The system is designed to monitor brain sensorimotor rhythms and present current stimuli with a configurable duration, frequency and amplitude in real time to the brain based on the brain activity report. The battery is charged via a novel ultrasonic wireless power delivery module developed for efficient delivery of power into a deeply-implanted system. The system was successfully tested through bench tests and in vivo tests on a behaving primate to record the local field potential (LFP) oscillation and stimulate the target area at the same time. PMID:27669264

  12. Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.

    PubMed

    Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian

    2014-01-01

    A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real-time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used in order to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand. Also, a virtual hand posture matching task was used to compare the controllers when reproducing six target postures. The performance when using the postural control scheme was significantly better (p < 0.05) than the finite state machines during the physical assessment when comparing within-subject averages using the SHAP percent difference metric. The virtual assessment results described significantly greater completion rates (97% and 99%) for the finite state machines, but the movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.

  13. Software platform for managing the classification of error- related potentials of observers

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.

  14. Mobile Tactical HF/VHF/EW System for Ground Forces

    DTIC Science & Technology

    1989-09-01

    presentation of what I have learned. I would like to thank my advisor, Professor Robert Partelow, and co-advisor, Commander James R. Powell, for the...analyze newly developed systems to determine how the man-machine interfaces of such systems can best be designed for optimal use by the operators. B...terminals and other controls. If factors like luminance ratio, reflectance, glare illuminance are allowed for good man-machine interface then an effective

  15. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    PubMed

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    The decoding algorithm in motor Brain Machine Interfaces translates neural signals into movement parameters. Such algorithms usually assume that the connection between neural firing and movement is stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning, etc., and leads to degradation of decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on Monte Carlo point process filtering that also estimates the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than one with static tuning parameters, which suggests a promising way to design a long-term-performing decoder for Brain Machine Interfaces.
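    A dual estimator of this kind can be sketched as a particle filter whose particles carry both the movement state and the tuning parameter, with each particle weighted by the point-process (Poisson) likelihood of the observed spike counts. The Python toy below assumes a log-linear tuning curve, a shared drifting gain and random-walk dynamics; it is only an illustration of the idea, not the authors' algorithm or data.

      import numpy as np

      rng = np.random.default_rng(0)

      # toy data: 1-D velocity and a slowly drifting population tuning gain
      T, dt, n_neurons = 300, 0.05, 5
      v_true = np.sin(np.linspace(0, 6 * np.pi, T))           # movement to decode
      k_true = 1.0 + 0.8 * np.linspace(0, 1, T)               # non-stationary tuning gain
      base = rng.uniform(1.0, 2.0, n_neurons)                 # per-neuron baseline log-rate
      rate = np.exp(base[None, :] + (k_true * v_true)[:, None])
      spikes = rng.poisson(rate * dt)                         # observed spike counts per bin

      # dual particle filter: each particle carries (velocity, tuning gain)
      N = 3000
      pv = rng.normal(0.0, 0.3, N)                            # velocity particles
      pk = rng.normal(1.0, 0.1, N)                            # gain particles (calibrated start)
      w = np.full(N, 1.0 / N)
      v_est, k_est = np.zeros(T), np.zeros(T)

      for t in range(T):
          pv += rng.normal(0, 0.15, N)                        # random-walk state model
          pk += rng.normal(0, 0.02, N)                        # slow random walk for the gain
          lam = np.exp(base[None, :] + (pk * pv)[:, None]) * dt
          loglik = (spikes[t] * np.log(lam) - lam).sum(axis=1)   # Poisson log-likelihood
          w *= np.exp(loglik - loglik.max())
          w /= w.sum()
          v_est[t], k_est[t] = w @ pv, w @ pk
          if 1.0 / np.sum(w ** 2) < N / 2:                    # resample when ESS collapses
              idx = rng.choice(N, N, p=w)
              pv, pk, w = pv[idx], pk[idx], np.full(N, 1.0 / N)

      print(f"final gain estimate {k_est[-1]:.2f} (true {k_true[-1]:.2f}), "
            f"velocity RMSE {np.sqrt(np.mean((v_est - v_true) ** 2)):.2f}")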

  16. Finite element analysis when orthogonal cutting of hybrid composite CFRP/Ti

    NASA Astrophysics Data System (ADS)

    Xu, Jinyang; El Mansori, Mohamed

    2015-07-01

    Hybrid composites, especially CFRP/Ti stacks, are usually considered an innovative structural configuration for manufacturing key load-bearing components in the modern aerospace industry. This paper proposes an original FE model to simulate the total chip formation process that dominates the hybrid cutting operation. The hybrid composite model was established based on three physical constituents, i.e., the Ti constituent, the interface and the CFRP constituent. Different constitutive models and damage criteria were introduced to replicate the interrelated cutting behaviour of the stack material. The CFRP/Ti interface was modelled as a third phase through the concept of a cohesive zone (CZ). Particular attention was paid to comparative studies of the influence of different cutting-sequence strategies on the machining responses induced in hybrid stack cutting. The numerical results emphasized the pivotal role of the cutting-sequence strategy on the various machining-induced responses, including cutting-force generation, machined surface quality and induced interface damage.
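    The cohesive-zone idea mentioned above is usually implemented through a traction-separation law at the interface. The Python sketch below evaluates a generic bilinear law with illustrative, assumed parameters (not the values used in the paper) to show how damage initiation, softening and the fracture energy are encoded.

      import numpy as np

      def bilinear_traction(delta, delta0=0.002, delta_f=0.02, t_max=60.0):
          """Bilinear traction-separation law: traction rises linearly to t_max at
          delta0 (damage initiation), then softens linearly to zero at delta_f
          (complete decohesion). Units here are illustrative: mm and MPa."""
          delta = np.asarray(delta, dtype=float)
          rising = t_max * delta / delta0
          softening = t_max * (delta_f - delta) / (delta_f - delta0)
          return np.clip(np.where(delta <= delta0, rising, softening), 0.0, None)

      # the critical energy release rate is the area under the curve: 0.5 * t_max * delta_f
      d = np.linspace(0.0, 0.025, 501)
      t = bilinear_traction(d)
      Gc = float(np.sum((t[:-1] + t[1:]) / 2.0 * np.diff(d)))
      print(f"G_c approx {Gc:.3f} N/mm (analytic: {0.5 * 60.0 * 0.02:.3f})")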

  17. Evolution of brain-computer interfaces: going beyond classic motor physiology

    PubMed Central

    Leuthardt, Eric C.; Schalk, Gerwin; Roland, Jarod; Rouse, Adam; Moran, Daniel W.

    2010-01-01

    The notion that a computer can decode brain signals to infer the intentions of a human and then enact those intentions directly through a machine is becoming a realistic technical possibility. These types of devices are known as brain-computer interfaces (BCIs). The evolution of these neuroprosthetic technologies could have significant implications for patients with motor disabilities by enhancing their ability to interact and communicate with their environment. The cortical physiology most investigated and used for device control has been brain signals from the primary motor cortex. To date, this classic motor physiology has been an effective substrate for demonstrating the potential efficacy of BCI-based control. However, emerging research now stands to further enhance our understanding of the cortical physiology underpinning human intent and provide further signals for more complex brain-derived control. In this review, the authors report the current status of BCIs and detail the emerging research trends that stand to augment clinical applications in the future. PMID:19569892

  18. Tactile objects based on an amplitude disturbed diffraction pattern method

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Nikolovski, Jean-Pierre; Mechbal, Nazih; Hafez, Moustapha; Vergé, Michel

    2009-12-01

    Tactile sensing is becoming widely used in human-computer interfaces. Recent advances in acoustic approaches demonstrated the possibilities to transform ordinary solid objects into interactive interfaces. This letter proposes a static finger contact localization process using an amplitude disturbed diffraction pattern method. The localization method is based on the following physical phenomenon: a finger contact modifies the energy distribution of an acoustic wave in a solid; these variations depend on the wave frequency and the contact position. The presented method first consists of exciting the object with an acoustic signal with plural frequency components. In a second step, a measured acoustic signal is compared with prerecorded values to deduce the contact position. This position is then used for human-machine interaction (e.g., finger tracking on a computer screen). The selection of excitation signals is discussed and a frequency choice criterion based on contrast value is proposed. Tests on a sandwich plate (liquid crystal display screen) demonstrate the simplicity and ease of applying the process to various solids.
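    The core of the method, comparing a measured multi-frequency amplitude signature against prerecorded signatures for known touch positions, can be sketched as a simple template-matching step. The Python example below uses randomly generated signatures and normalized correlation as the matching criterion; both are assumptions for illustration, not the paper's calibration data or its contrast-based criterion.

      import numpy as np

      rng = np.random.default_rng(1)

      # hypothetical calibration: amplitude signature at F excitation frequencies,
      # recorded for each of P known finger positions on the plate
      F, P = 8, 25
      templates = rng.uniform(0.2, 1.0, size=(P, F))        # prerecorded signatures
      positions = [(i % 5, i // 5) for i in range(P)]       # grid of touch locations (cm)

      def locate(measured, templates, positions):
          """Return the calibrated position whose signature best matches the measurement."""
          t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
          m = measured / np.linalg.norm(measured)
          return positions[int(np.argmax(t @ m))]           # normalized correlation match

      # simulate a touch at calibrated point 12 with measurement noise
      measured = templates[12] + rng.normal(0, 0.02, F)
      print(locate(measured, templates, positions))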

  19. Functional near-infrared spectroscopy for adaptive human-computer interfaces

    NASA Astrophysics Data System (ADS)

    Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.

    2015-03-01

    We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work [1] that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.

  20. Compatibility Problems of Network Interfacing.

    ERIC Educational Resources Information Center

    Stevens, Mary Elizabeth

    From the standpoint of information network technology there is a necessary emphasis upon compatibility requirements which, in turn, will be met at least in part by various techniques of achieving convertibility --- between machine and machine, between man and machine, and between man and man. It may be hoped that improved compatibilities between…

  1. Human Factors in Cabin Accident Investigations

    NASA Technical Reports Server (NTRS)

    Chute, Rebecca D.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Human factors has become an integral part of the accident investigation protocol. However, much of the investigative process remains focussed on the flight deck, airframe, and power plant systems. As a consequence, little data has been collected regarding the human factors issues within and involving the cabin during an accident. Therefore, the possibility exists that contributing factors that lie within that domain may be overlooked. The FAA Office of Accident Investigation is sponsoring a two-day workshop on cabin safety accident investigation. This course, within the workshop, will be of two hours duration and will explore relevant areas of human factors research. Specifically, the three areas of discussion are: Information transfer and resource management, fatigue and other physical stressors, and the human/machine interface. Integration of these areas will be accomplished by providing a suggested checklist of specific cabin-related human factors questions for investigators to probe following an accident.

  2. Quadcopter control in three-dimensional space using a noninvasive motor imagery based brain-computer interface

    PubMed Central

    LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin

    2013-01-01

    Objective At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real world device has on subjects’ control with comparison to a two-dimensional virtual cursor task. Approach Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user’s ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in the three-dimensional physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional non-invasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712

  3. Identifying well-formed biomedical phrases in MEDLINE® text.

    PubMed

    Kim, Won; Yeganova, Lana; Comeau, Donald C; Wilbur, W John

    2012-12-01

    In the modern world people frequently interact with retrieval systems to satisfy their information needs. Humanly understandable well-formed phrases represent a crucial interface between humans and the web, and the ability to index and search with such phrases is beneficial for human-web interactions. In this paper we consider the problem of identifying humanly understandable, well formed, and high quality biomedical phrases in MEDLINE documents. The main approaches used previously for detecting such phrases are syntactic, statistical, and a hybrid approach combining these two. In this paper we propose a supervised learning approach for identifying high quality phrases. First we obtain a set of known well-formed useful phrases from an existing source and label these phrases as positive. We then extract from MEDLINE a large set of multiword strings that do not contain stop words or punctuation. We believe this unlabeled set contains many well-formed phrases. Our goal is to identify these additional high quality phrases. We examine various feature combinations and several machine learning strategies designed to solve this problem. A proper choice of machine learning methods and features identifies in the large collection strings that are likely to be high quality phrases. We evaluate our approach by making human judgments on multiword strings extracted from MEDLINE using our methods. We find that over 85% of such extracted phrase candidates are humanly judged to be of high quality.
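    One way to sketch the supervised set-up described above (known good phrases as positives, a large unlabeled pool mined for further candidates) is the provisional-negative shortcut below, written with scikit-learn. The toy phrases, the character n-gram features and the logistic-regression ranker are all stand-ins chosen for brevity, not the feature sets or learners evaluated in the paper.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      # toy stand-ins: known well-formed phrases (positives) and unlabeled MEDLINE strings
      positives = ["breast cancer", "blood pressure", "randomized controlled trial",
                   "gene expression", "magnetic resonance imaging"]
      unlabeled = ["cancer of the", "pressure was measured", "insulin resistance",
                   "trial of the drug", "body mass index", "was performed using"]

      # a common shortcut (an assumption, not the paper's exact method): treat the
      # unlabeled strings as provisional negatives and rank them by classifier score
      X_text = positives + unlabeled
      y = [1] * len(positives) + [0] * len(unlabeled)

      vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))   # crude surface features
      X = vec.fit_transform(X_text)
      clf = LogisticRegression(max_iter=1000).fit(X, y)

      # unlabeled strings that still score high despite the provisional negative label
      # are candidates for additional high-quality phrases
      scores = clf.predict_proba(vec.transform(unlabeled))[:, 1]
      for s, p in sorted(zip(unlabeled, scores), key=lambda t: -t[1]):
          print(f"{p:.2f}  {s}")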

  4. Human-computer interface for the study of information fusion concepts in situation analysis and command decision support systems

    NASA Astrophysics Data System (ADS)

    Roy, Jean; Breton, Richard; Paradis, Stephane

    2001-08-01

    Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation Analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of SA concepts and the design of data fusion techniques must take into account human factors in order to ensure a cognitive fit of the fusion system with the decision maker. Indeed, the tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human-machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.

  5. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    PubMed

    Larson, Eric; Terry, Howard P; Canevari, Margaux M; Stepp, Cara E

    2013-01-01

    Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, auditory feedback remains underutilized for HMI control, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups and the new categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native vowel) targets with an unambiguous cue.

  6. Effect of the crown design and interface lute parameters on the stress-state of a machined crown-tooth system: a finite element analysis.

    PubMed

    Shahrbaf, Shirin; vanNoort, Richard; Mirzakouchaki, Behnam; Ghassemieh, Elaheh; Martin, Nicolas

    2013-08-01

    The effect of preparation design and the physical properties of the interface lute on the restored machined ceramic crown-tooth complex are poorly understood. The aim of this work was to determine, by means of three-dimensional finite element analysis (3D FEA), the effect of the tooth preparation design and the elastic modulus of the cement on the stress state of the cemented machined ceramic crown-tooth complex. The three-dimensional structure of human premolar teeth, restored with adhesively cemented machined ceramic crowns, was digitized with a micro-CT scanner. An accurate, high resolution, digital replica model of a restored tooth was created. Two preparation designs, with different occlusal morphologies, were modeled with cements of three different elastic moduli. Interactive medical image processing software (Mimics) and professional CAD modeling software were used to create sophisticated digital models that included the supporting structures: the periodontal ligament and alveolar bone. The generated models were imported into an FEA software program (HyperMesh version 10.0, Altair Engineering Inc.) with all degrees of freedom constrained at the outer surface of the supporting cortical bone of the crown-tooth complex. Five different elastic moduli values were given to the adhesive cement interface: 1.8 GPa, 4 GPa, 8 GPa, 18.3 GPa and 40 GPa; the four lower values are representative of currently used cementing lutes and 40 GPa is set as an extremely high value. The stress distribution under simulated applied loads was determined. The preparation design demonstrated an effect on the stress state of the restored tooth system. The cement elastic modulus affected the stress state in the cement and dentin structures but not in the crown, the pulp, the periodontal ligament or the cancellous and cortical bone. The results of this study suggest that both the choice of the preparation design and the cement elastic modulus can affect the stress state within the restored crown-tooth complex.

  7. Model and experiments to optimize co-adaptation in a simplified myoelectric control system

    NASA Astrophysics Data System (ADS)

    Couraud, M.; Cattaert, D.; Paclet, F.; Oudeyer, P. Y.; de Rugy, A.

    2018-04-01

    Objective. To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that were developed in the field of brain-machine interfaces and that are beginning to be used in myoelectric controls. Approach. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. Results. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. Significance. The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
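    The trade-off reported above (low co-adaptation gain: slow, incomplete correction; high gain: faster but noise-amplifying; variable gain: both benefits) can be illustrated with a toy error-driven loop in which a simulated human and the machine both compensate for a rotation of the muscle pulling vectors. The model below is deliberately minimal and entirely assumed; it is not the directionally tuned adaptation model used in the study.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate(gain, trials=300, rotation=np.deg2rad(45),
                   human_rate=0.1, noise_sd=np.deg2rad(5)):
          """Toy error-driven loop: a simulated human and the co-adapting machine both
          compensate for a rotation of the muscle pulling vectors. `gain` maps the
          trial index to the machine co-adaptation gain."""
          h = m = 0.0                                   # human / machine compensation (rad)
          err = np.zeros(trials)
          for t in range(trials):
              err[t] = rotation - h - m + rng.normal(0, noise_sd)  # residual direction error
              h += human_rate * err[t]                  # error-proportional human adaptation
              m += gain(t) * err[t]                     # machine co-adaptation step
          early = np.rad2deg(np.abs(err[:15]).mean())   # how quickly errors shrink
          late = np.rad2deg(np.abs(err[-100:]).mean())  # steady-state, noise-driven error
          return early, late

      for label, g in [("low fixed gain ", lambda t: 0.02),
                       ("high fixed gain", lambda t: 0.8),
                       ("variable gain  ", lambda t: 0.8 * np.exp(-t / 20))]:
          early, late = simulate(g)
          print(f"{label}: early |error| = {early:5.1f} deg, late |error| = {late:4.1f} deg")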

  8. Skills based evaluation of alternative input methods to command a semi-autonomous electric wheelchair.

    PubMed

    Rojas, Mario; Ponce, Pedro; Molina, Arturo

    2016-08-01

    This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head movements and speech recognition controls, was designed to facilitate mobility skills for severely disabled people. Thirteen tasks, which are common to all wheelchair users, were attempted five times by controlling the chair with the virtual joystick and the hands-free interfaces, in different areas, by disabled and non-disabled people. Even though the prototype has an intelligent navigation control, based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scores showed that the head-movement control and the virtual joystick have similar capabilities, 92.3% and 100%, respectively. However, the 54.6% capacity score obtained for the speech control interface indicates the need for navigation assistance to accomplish some of the goals. Furthermore, the evaluation times indicate which skills require more user training with the interface, and point to specifications for improving the overall performance of the wheelchair.

  9. TOPICAL REVIEW: Prosthetic interfaces with the visual system: biological issues

    NASA Astrophysics Data System (ADS)

    Cohen, Ethan D.

    2007-06-01

    The design of effective visual prostheses for the blind represents a challenge for biomedical engineers and neuroscientists. Significant progress has been made in the miniaturization and processing power of prosthesis electronics; however development lags in the design and construction of effective machine brain interfaces with visual system neurons. This review summarizes what has been learned about stimulating neurons in the human and primate retina, lateral geniculate nucleus and visual cortex. Each level of the visual system presents unique challenges for neural interface design. Blind patients with the retinal degenerative disease retinitis pigmentosa (RP) are a common population in clinical trials of visual prostheses. The visual performance abilities of normals and RP patients are compared. To generate pattern vision in blind patients, the visual prosthetic interface must effectively stimulate the retinotopically organized neurons in the central visual field to elicit patterned visual percepts. The development of more biologically compatible methods of stimulating visual system neurons is critical to the development of finer spatial percepts. Prosthesis electrode arrays need to adapt to different optimal stimulus locations, stimulus patterns, and patient disease states.

  10. GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.

    PubMed

    Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin

    2014-02-01

    We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) the glossokinetic potential (GKP), which involves tongue movement; 2) the electrooculogram (EOG), which involves eye movement; and 3) the electromyogram (EMG), which involves teeth clenching. Each potential has been individually used for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With the feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
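    Extracting discriminative features from two class-specific covariance matrices is closely related to common spatial patterns. The sketch below builds a tongue-only and an eye-only covariance matrix from simulated multichannel data and solves a generalized eigenproblem to obtain separating spatial filters; the simulated data and the CSP-style formulation are assumptions for illustration, not the authors' exact procedure.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(7)

      # simulated multichannel facial-potential recordings for the two conditions
      n_ch, n_samp = 8, 5000
      X_tongue = rng.normal(size=(n_ch, n_ch)) @ rng.normal(size=(n_ch, n_samp))
      X_eye = rng.normal(size=(n_ch, n_ch)) @ rng.normal(size=(n_ch, n_samp))

      C_tongue, C_eye = np.cov(X_tongue), np.cov(X_eye)

      # generalized eigendecomposition: the last filter maximizes tongue variance
      # relative to the combined variance, the first does the opposite
      evals, W = eigh(C_tongue, C_tongue + C_eye)
      filters = W[:, [0, -1]]

      def features(x):
          """Log-variance of the spatially filtered signals, a common discriminative feature."""
          return np.log(np.var(filters.T @ x, axis=1))

      print("tongue trial:", features(X_tongue[:, :500]))
      print("eye trial   :", features(X_eye[:, :500]))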

  11. Gesture-Controlled Interfaces for Self-Service Machines

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J.; Beach, Glenn

    2006-01-01

    Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs), for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, this gesture-controlled interface can extract more information from dynamic than it can from static gestures.

  12. Learning algorithms for human-machine interfaces.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2009-05-01

    The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.
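    The two adaptive maps compared in this study, a one-shot Moore-Penrose pseudoinverse fit and incremental LMS gradient descent, can be sketched on simulated data as below. The dimensions, the linear ground-truth map and the learning rate are assumptions made only for illustration.

      import numpy as np

      rng = np.random.default_rng(3)

      # toy stand-in: 19-D "glove" signals mapped to 2 simulated joint angles
      n_glove, n_joint, n_samples = 19, 2, 300
      G = rng.normal(size=(n_samples, n_glove))              # recorded finger-motion signals
      W_true = rng.normal(size=(n_glove, n_joint))           # unknown ideal map (assumed linear)
      Q = G @ W_true + rng.normal(0, 0.05, size=(n_samples, n_joint))  # target joint angles

      # Moore-Penrose pseudoinverse: one-shot least-squares fit of the glove-to-joint map
      W_mp = np.linalg.pinv(G) @ Q

      # LMS gradient descent: incremental updates that reduce past endpoint error
      W_lms = np.zeros((n_glove, n_joint))
      lr = 0.01
      for g, q in zip(G, Q):
          err = q - g @ W_lms                # error for this sample
          W_lms += lr * np.outer(g, err)     # stochastic gradient step

      for name, W in [("pseudoinverse", W_mp), ("LMS", W_lms)]:
          rmse = np.sqrt(np.mean((G @ W - Q) ** 2))
          print(f"{name:13s} training RMSE: {rmse:.3f}")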

  13. GAPscreener: an automatic tool for screening human genetic association literature in PubMed using the support vector machine technique.

    PubMed

    Yu, Wei; Clyne, Melinda; Dolan, Siobhan M; Yesupriya, Ajay; Wulf, Anja; Liu, Tiebin; Khoury, Muin J; Gwinn, Marta

    2008-04-22

    Synthesis of data from published human genetic association studies is a critical step in the translation of human genome discoveries into health applications. Although genetic association studies account for a substantial proportion of the abstracts in PubMed, identifying them with standard queries is not always accurate or efficient. Further automating the literature-screening process can reduce the burden of a labor-intensive and time-consuming traditional literature search. The Support Vector Machine (SVM), a well-established machine learning technique, has been successful in classifying text, including biomedical literature. The GAPscreener, a free SVM-based software tool, can be used to assist in screening PubMed abstracts for human genetic association studies. The data source for this research was the HuGE Navigator, formerly known as the HuGE Pub Lit database. Weighted SVM feature selection based on a keyword list obtained by the two-way z score method demonstrated the best screening performance, achieving 97.5% recall, 98.3% specificity and 31.9% precision in performance testing. Compared with the traditional screening process based on a complex PubMed query, the SVM tool reduced by about 90% the number of abstracts requiring individual review by the database curator. The tool also ascertained 47 articles that were missed by the traditional literature screening process during the 4-week test period. We examined the literature on genetic associations with preterm birth as an example. Compared with the traditional, manual process, the GAPscreener both reduced effort and improved accuracy. GAPscreener is the first free SVM-based application available for screening the human genetic association literature in PubMed with high recall and specificity. The user-friendly graphical user interface makes this a practical, stand-alone application. The software can be downloaded at no charge.
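    A minimal stand-in for this kind of SVM-based abstract screening, written with scikit-learn, is shown below. The toy abstracts, the TF-IDF features and the default LinearSVC settings are assumptions chosen for brevity; GAPscreener's weighted keyword features and trained model are not reproduced here.

      from sklearn.pipeline import make_pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      # toy labeled abstracts: 1 = human genetic association study, 0 = other
      abstracts = [
          "polymorphism in the IL6 gene associated with preterm birth risk",
          "genome-wide association study of type 2 diabetes in a cohort",
          "APOE genotype and odds ratio for Alzheimer disease in cases and controls",
          "surgical technique for repair of rotator cuff tears",
          "randomized trial of a new inhaled corticosteroid for asthma control",
          "imaging findings in acute pancreatitis: a retrospective review",
      ]
      labels = [1, 1, 1, 0, 0, 0]

      # a linear SVM over word features stands in for the weighted keyword features
      # described in the paper (the exact feature selection is not reproduced here)
      screen = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      screen.fit(abstracts, labels)

      print(screen.predict([
          "association between MTHFR variants and neural tube defects",
          "cost analysis of hospital readmissions after hip replacement",
      ]))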

  14. Operability of Space Station Freedom's meteoroid/debris protection system

    NASA Technical Reports Server (NTRS)

    Kahl, Maggie S.; Stokes, Jack W.

    1992-01-01

    The design of Space Station Freedom's external structure must not only protect the spacecraft from the hazardous environment, but also must be compatible with the extravehicular activity system for assembly and maintenance. The external procedures for module support are utility connections, external orbital replaceable unit changeout, and maintenance of the meteoroid/debris shields and multilayer insulation. All of these interfaces require proper man-machine engineering to be compatible with the extravehicular activity and manipulator systems. This paper discusses design solutions, including those provided for human interface, to the Space Station Freedom meteoroid/debris protection system. The system advantages and current access capabilities are illustrated through analysis of its configuration over the Space Station Freedom resource nodes and common modules, with emphasis on the cylindrical sections and endcones.

  15. Synaptic organic transistors with a vacuum-deposited charge-trapping nanosheet

    PubMed Central

    Kim, Chang-Hyun; Sung, Sujin; Yoon, Myung-Han

    2016-01-01

    Organic neuromorphic devices hold great promise for unconventional signal processing and efficient human-machine interfaces. Herein, we propose novel synaptic organic transistors devised to overcome the traditional trade-off between channel conductance and memory performance. A vacuum-processed, nanoscale metallic interlayer provides an ultra-flat surface for a high-mobility molecular film as well as a desirable degree of charge trapping, allowing for low-temperature fabrication of uniform device arrays on plastic. The device architecture is implemented by widely available electronic materials in combination with conventional deposition methods. Therefore, our results are expected to generate broader interests in incorporation of organic electronics into large-area neuromorphic systems, with potential in gate-addressable complex logic circuits and transparent multifunctional interfaces receiving direct optical and cellular stimulation. PMID:27645425

  16. Re-Design and Beta Testing of the Man-Machine Integration Design and Analysis System: MIDAS

    NASA Technical Reports Server (NTRS)

    Shively, R. Jay; Rutkowski, Michael (Technical Monitor)

    1999-01-01

    The Man-machine Integration Design and Analysis System (MIDAS) is a human factors design and analysis system that combines human cognitive models with 3D CAD models and rapid prototyping and simulation techniques. MIDAS allows designers to ask 'what if' types of questions early in concept exploration and development, prior to actual hardware development. The system outputs predictions of operator workload, situational awareness and system performance, as well as graphical visualization of the cockpit designs interacting with models of the human in a mission scenario. Recently, MIDAS was re-designed to enhance functionality and usability. The goals driving the redesign include more efficient processing, a GUI, advances in the memory structures, and implementation of external vision and audition models. These changes were detailed in an earlier paper. Two Beta test sites with diverse applications have been chosen. One Beta test site is investigating the development of a new airframe and its interaction with the air traffic management system. The second Beta test effort will investigate 3D auditory cueing in conjunction with traditional visual cueing strategies, including panel-mounted and heads-up displays. The progress and lessons learned on each of these projects will be discussed.

  17. Decoding the individual finger movements from single-trial functional magnetic resonance imaging recordings of human brain activity.

    PubMed

    Shen, Guohua; Zhang, Jing; Wang, Mengxing; Lei, Du; Yang, Guang; Zhang, Shanmin; Du, Xiaoxia

    2014-06-01

    Multivariate pattern classification analysis (MVPA) has been applied to functional magnetic resonance imaging (fMRI) data to decode brain states from spatially distributed activation patterns. Decoding upper limb movements from non-invasively recorded human brain activation is crucial for implementing a brain-machine interface that directly harnesses an individual's thoughts to control external devices or computers. The aim of this study was to decode the individual finger movements from fMRI single-trial data. Thirteen healthy human subjects participated in a visually cued delayed finger movement task, and only one slight button press was performed in each trial. Using MVPA, the decoding accuracy (DA) was computed separately for the different motor-related regions of interest. For the construction of feature vectors, the feature vectors from two successive volumes in the image series for a trial were concatenated. With these spatial-temporal feature vectors, we obtained a 63.1% average DA (84.7% for the best subject) for the contralateral primary somatosensory cortex and a 46.0% average DA (71.0% for the best subject) for the contralateral primary motor cortex; both of these values were significantly above the chance level (20%). In addition, we implemented searchlight MVPA to search for informative regions in an unbiased manner across the whole brain. Furthermore, by applying searchlight MVPA to each volume of a trial, we visually demonstrated the information for decoding, both spatially and temporally. The results suggest that the non-invasive fMRI technique may provide informative features for decoding individual finger movements and the potential of developing an fMRI-based brain-machine interface for finger movement.
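    The decoding scheme described above (concatenating feature vectors from two successive volumes and classifying finger identity) can be sketched on simulated data as follows. The region size, noise level, linear SVM and cross-validation scheme are assumptions for illustration; this is not the study's fMRI data or preprocessing.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(4)

      # toy stand-in for single-trial fMRI data: trials x volumes x voxels in one ROI
      n_trials, n_voxels, n_fingers = 100, 50, 5
      fingers = rng.integers(0, n_fingers, n_trials)              # which finger was pressed
      patterns = rng.normal(size=(n_fingers, 2, n_voxels))        # finger-specific activation
      vols = patterns[fingers] + rng.normal(0, 2.0, size=(n_trials, 2, n_voxels))

      # spatial-temporal features: concatenate two successive volumes per trial
      X = vols.reshape(n_trials, -1)
      y = fingers

      clf = make_pipeline(StandardScaler(), LinearSVC())
      acc = cross_val_score(clf, X, y, cv=5)
      print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_fingers:.2f})")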

  18. Conversational sensing

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Gwilliams, Chris; Parizas, Christos; Pizzocaro, Diego; Bakdash, Jonathan Z.; Braines, Dave

    2014-05-01

    Recent developments in sensing technologies, mobile devices and context-aware user interfaces have made it possible to represent information fusion and situational awareness for Intelligence, Surveillance and Reconnaissance (ISR) activities as a conversational process among actors at or near the tactical edges of a network. Motivated by use cases in the domain of Company Intelligence Support Team (CoIST) tasks, this paper presents an approach to information collection, fusion and sense-making based on the use of natural language (NL) and controlled natural language (CNL) to support richer forms of human-machine interaction. The approach uses a conversational protocol to facilitate a flow of collaborative messages from NL to CNL and back again in support of interactions such as: turning eyewitness reports from human observers into actionable information (from both soldier and civilian sources); fusing information from humans and physical sensors (with associated quality metadata); and assisting human analysts to make the best use of available sensing assets in an area of interest (governed by management and security policies). CNL is used as a common formal knowledge representation for both machine and human agents to support reasoning, semantic information fusion and generation of rationale for inferences, in ways that remain transparent to human users. Examples are provided of various alternative styles for user feedback, including NL, CNL and graphical feedback. A pilot experiment with human subjects shows that a prototype conversational agent is able to gather usable CNL information from untrained human subjects.

  19. Who Needs to Fit In? Who Gets to Stand Out? Communication Technologies Including Brain-Machine Interfaces Revealed from the Perspectives of Special Education School Teachers through an Ableism Lens

    ERIC Educational Resources Information Center

    Diep, Lucy; Wolbring, Gregor

    2013-01-01

    Some new and envisioned technologies such as brain machine interfaces (BMI) that are being developed initially for people with disabilities, but whose use can also be expanded to the general public have the potential to change body ability expectations of disabled and non-disabled people beyond the species-typical. The ways in which this dynamic…

  20. Body-Machine Interfaces after Spinal Cord Injury: Rehabilitation and Brain Plasticity.

    PubMed

    Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B; Wang, Xue; Parrish, Todd; Mussa-Ivaldi, Ferdinando A

    2016-12-19

    The purpose of this study was to identify rehabilitative effects and changes in white matter microstructure in people with high-level spinal cord injury following bilateral upper-extremity motor skill training. Five subjects with high-level (C5-C6) spinal cord injury (SCI) performed five visuo-spatial motor training tasks over 12 sessions (2-3 sessions per week). Subjects controlled a two-dimensional cursor with bilateral simultaneous movements of the shoulders using a non-invasive inertial measurement unit-based body-machine interface. Subjects' upper-body ability was evaluated before the start, in the middle and a day after the completion of training. MR imaging data were acquired before the start and within two days of the completion of training. Subjects learned to use upper-body movements that survived the injury to control the body-machine interface and improved their performance with practice. Motor training increased Manual Muscle Test scores and the isometric force of subjects' shoulders and upper arms. Moreover, motor training increased fractional anisotropy (FA) values in the cingulum of the left hemisphere by 6.02% on average, indicating localized white matter microstructure changes induced by activity-dependent modulation of axon diameter, myelin thickness or axon number. This body-machine interface may serve as a platform to develop a new generation of assistive-rehabilitative devices that promote the use of, and that re-strengthen, the motor and sensory functions that survived the injury.
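    Body-machine interfaces of this kind typically project the high-dimensional body signals onto a low-dimensional control space learned from a calibration recording. The sketch below uses PCA on simulated shoulder IMU channels as one common choice of projection; the channel definitions and the use of PCA are assumptions for illustration, not necessarily the exact map used in this study.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)

      # toy stand-in for IMU signals from both shoulders during a free calibration recording:
      # n_samples x 4 channels (e.g., left/right shoulder elevation and protraction)
      calibration = rng.normal(size=(2000, 4)) @ rng.normal(size=(4, 4))

      # project the correlated body signals onto a two-dimensional control space
      pca = PCA(n_components=2).fit(calibration)

      def body_to_cursor(imu_sample, gain=1.0):
          """Map one 4-channel shoulder sample to (x, y) cursor coordinates."""
          return gain * pca.transform(imu_sample.reshape(1, -1))[0]

      print(body_to_cursor(rng.normal(size=4)))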

  1. A Cognitive Systems Engineering Approach to Developing HMI Requirements for New Technologies

    NASA Technical Reports Server (NTRS)

    Fern, Lisa Carolynn

    2016-01-01

    This document examines the challenges inherent in designing and regulating to support human-automation interaction for new technologies that will be deployed into complex systems. A key question for new technologies is how work will be accomplished by the human and machine agents. This question has traditionally been framed as how functions should be allocated between humans and machines. Such framing misses the coordination and synchronization that is needed for the different human and machine roles in the system to accomplish their goals. Coordination and synchronization demands are driven by the underlying human-automation architecture of the new technology, which are typically not specified explicitly by the designers. The human-machine interface (HMI), which is intended to facilitate human-machine interaction and cooperation, however, typically is defined explicitly and therefore serves as a proxy for human-automation cooperation requirements with respect to technical standards for technologies. Unfortunately, mismatches between the HMI and the coordination and synchronization demands of the underlying human-automation architecture can lead to system breakdowns. A methodology is needed that both designers and regulators can utilize to evaluate the expected performance of a new technology given potential human-automation architectures. Three experiments were conducted to inform the minimum HMI requirements for a detect and avoid system for unmanned aircraft systems (UAS). The results of the experiments provided empirical input to specific minimum operational performance standards that UAS manufacturers will have to meet in order to operate UAS in the National Airspace System (NAS). These studies represent a success story for how to objectively and systematically evaluate prototype technologies as part of the process for developing regulatory requirements. They also provide an opportunity to reflect on the lessons learned from a recent research effort in order to improve the methodology for defining technology requirements for regulators in the future. The biggest shortcoming of the presented research program was the absence of the explicit definition, generation and analysis of potential human-automation architectures. Failure to execute this step in the research process resulted in less efficient evaluation of the candidate prototype technologies, in addition to the complete absence of different approaches to human-automation cooperation. For example, all of the prototype technologies that were evaluated in the research program assumed a human-automation architecture that relied on serial processing from the automation to the human. While this type of human-automation architecture is typical across many different technologies and in many different domains, it ignores different architectures where humans and automation work in parallel. Defining potential human-automation architectures a priori also allows regulators to develop scenarios that will stress the performance boundaries of the technology during the evaluation phase. The importance of adding this step of generating and evaluating candidate human-automation architectures prior to formal empirical evaluation is discussed.

  2. Towards Intelligent Environments: An Augmented Reality–Brain–Machine Interface Operated with a See-Through Head-Mount Display

    PubMed Central

    Takano, Kouji; Hata, Naoki; Kansaku, Kenji

    2011-01-01

    The brain–machine interface (BMI) or brain–computer interface is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here, we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flicker visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that using an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR–BMI with both the HMD and LCD monitor. Our results indicate that AR–BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments. PMID:21541307

  3. Proceedings of the Annual Seminar (First), ’The Art of Communications Interfaces’, Held at Fort Monmouth, New Jersey on 22 April 1976,

    DTIC Science & Technology

    Both the oldest and the newest problem areas in communications electronics interfaces are discussed in conjunction with the currently critical...digital communication system evolution. The oldest interface problem, still the most essential, is the man-machine communications interface. The newest is

  4. Micro-patterned graphene-based sensing skins for human physiological monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Long; Loh, Kenneth J.; Chiang, Wei-Hung; Manna, Kausik

    2018-03-01

    Ultrathin, flexible, conformal, and skin-like electronic transducers are emerging as promising candidates for noninvasive and nonintrusive human health monitoring. In this work, a wearable sensing membrane is developed by patterning a graphene-based solution onto ultrathin medical tape, which can then be attached to the skin for monitoring human physiological parameters and physical activity. Here, the sensor is validated for monitoring finger bending/movements and for recognizing hand motion patterns, thereby demonstrating its future potential for evaluating athletic performance, physical therapy, and designing next-generation human-machine interfaces. Furthermore, this study also quantifies the sensor’s ability to monitor eye blinking and radial pulse in real-time, which can find broader applications for the healthcare sector. Overall, the printed graphene-based sensing skin is highly conformable, flexible, lightweight, nonintrusive, mechanically robust, and is characterized by high strain sensitivity.

  5. Graphene-Based Three-Dimensional Capacitive Touch Sensor for Wearable Electronics.

    PubMed

    Kang, Minpyo; Kim, Jejung; Jang, Bongkyun; Chae, Youngcheol; Kim, Jae-Hyun; Ahn, Jong-Hyun

    2017-08-22

    The development of input device technology in a conformal and stretchable format is important for the advancement of various wearable electronics. Herein, we report a capacitive touch sensor with good sensing capabilities in both contact and noncontact modes, enabled by the use of graphene and a thin device geometry. This device can be integrated with highly deformable areas of the human body, such as the forearms and palms. This touch sensor detects multiple touch signals in acute recordings and recognizes the distance and shape of the approaching objects before direct contact is made. This technology offers a convenient and immersive human-machine interface and additional potential utility as a multifunctional sensor for emerging wearable electronics and robotics.

  6. RAPID and HTML5's potential

    NASA Technical Reports Server (NTRS)

    Torosyan, David

    2012-01-01

    Just as important as the engineering that goes into building a robot is the method of interaction, or how human users will use the machine. As part of the Human-System Interactions group (Conductor) at JPL, I explored using a web interface to interact with ATHLETE, a prototype lunar rover. I investigated the usefulness of HTML5 and JavaScript as a telemetry viewer as well as the feasibility of having a rover communicate with a web server. To test my ideas I built a mobile-compatible website designed primarily for an Android tablet. The website took input from ATHLETE engineers, and upon its completion I conducted a user test to assess its effectiveness.

  7. Test-bench system for a borehole azimuthal acoustic reflection imaging logging tool

    NASA Astrophysics Data System (ADS)

    Liu, Xianping; Ju, Xiaodong; Qiao, Wenxiao; Lu, Junqiang; Men, Baiyong; Liu, Dong

    2016-06-01

    The borehole azimuthal acoustic reflection imaging logging tool (BAAR) is a new generation of imaging logging tool, which is able to investigate strata in a relatively large range of space around the borehole. The BAAR is designed based on the idea of modularization, with a very complex structure, so it has become urgent for us to develop a dedicated test-bench system to debug each module of the BAAR. With the test-bench system introduced in this paper, testing and calibration of the BAAR can be easily achieved. The test-bench system is designed based on the client/server model. The hardware system mainly consists of a host computer, an embedded controlling board, a bus interface board, a data acquisition board and a telemetry communication board. The host computer serves as the human-machine interface and processes the uploaded data. The software running on the host computer is designed based on VC++. The embedded controlling board uses Advanced Reduced Instruction Set Machines 7 (ARM7) as the microcontroller and communicates with the host computer via Ethernet. The software for the embedded controlling board is developed based on the operating system uClinux. The bus interface board, data acquisition board and telemetry communication board are designed based on a field programmable gate array (FPGA) and provide test interfaces for the logging tool. To examine the feasibility of the test-bench system, it was set up to perform a test on the BAAR. By analyzing the test results, an unqualified channel of the electronic receiving cabin was discovered. This suggests that the test-bench system can be used to quickly determine the working condition of the sub-modules of the BAAR, and it is of great significance in improving production efficiency and accelerating industrial production of the logging tool.

  8. Human Engineering Operations and Habitability Assessment: A Process for Advanced Life Support Ground Facility Testbeds

    NASA Technical Reports Server (NTRS)

    Connolly, Janis H.; Arch, M.; Elfezouaty, Eileen Schultz; Novak, Jennifer Blume; Bond, Robert L. (Technical Monitor)

    1999-01-01

    Design and Human Engineering (HE) processes strive to ensure that the human-machine interface is designed for optimal performance throughout the system life cycle. Each component can be tested and assessed independently to assure optimal performance, but it is not until full integration that the system, and the inherent interactions between its components, can be assessed as a whole. HE processes (which define and apply requirements for human interaction with missions/systems) are included in space flight activities, but also need to be included in ground activities, specifically ground facility testbeds such as Bio-Plex. A unique aspect of the Bio-Plex Facility is the integral issue of Habitability, which comprises the qualities of the environment that allow humans to work and live. HE is a process by which Habitability and system performance can be assessed.

  9. Orders on file but no labs drawn: investigation of machine and human errors caused by an interface idiosyncrasy.

    PubMed

    Schreiber, Richard; Sittig, Dean F; Ash, Joan; Wright, Adam

    2017-09-01

    In this report, we describe 2 instances in which expert use of an electronic health record (EHR) system interfaced to an external clinical laboratory information system led to unintended consequences wherein 2 patients failed to have laboratory tests drawn in a timely manner. In both events, user actions combined with the lack of an acknowledgment message describing the order cancellation from the external clinical system were the root causes. In 1 case, rapid, near-simultaneous order entry was the culprit; in the second, astute order management by a clinician, unaware of the lack of proper 2-way interface messaging from the external clinical system, led to the confusion. Although testing had shown that the laboratory system would cancel duplicate laboratory orders, it was thought that duplicate alerting in the new order entry system would prevent such events. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
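
    As a small illustration of the multi-sensor fusion idea (not the neural-net processor described above), the sketch below shows textbook inverse-variance fusion of two independent Gaussian sensor estimates; the sensor names and numbers are invented.

```python
import numpy as np

def fuse_estimates(means, variances):
    """
    Inverse-variance (minimum-variance) fusion of independent Gaussian
    sensor estimates of the same quantity, e.g. altitude above terrain.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Example: radar altimeter vs. database-correlated terrain estimate (made-up numbers).
mean, var = fuse_estimates([152.0, 148.5], [4.0, 9.0])
print(f"fused estimate: {mean:.1f} m (variance {var:.2f})")
```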

  11. Qualitative CFD for Rapid Learning in Industrial and Academic Applications

    NASA Astrophysics Data System (ADS)

    Variano, Evan

    2010-11-01

    We present a set of tools that allow CFD to be used at an early stage in the design process. Users can rapidly explore the qualitative aspects of fluid flow using real-time simulations that react immediately to design changes. This can guide the design process by fostering an intuitive understanding of fluid dynamics at the prototyping stage. We use an extremely stable Navier-Stokes solver that is available commercially (and free to academic users), plus a custom user interface. The code is designed for the animation and gaming industry, and we exploit its powerful graphical display capabilities to develop a unique human-machine interface. This interface allows the user to efficiently explore the flow in 3D and in real time, fostering an intuitive understanding of steady and unsteady flow patterns. There are obvious extensions for use in an academic setting. The trade-offs between accuracy and speed will be discussed in the context of CFD's role in design and education.
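
    The commercial solver itself is not described in the abstract; as a generic illustration of the kind of unconditionally stable step that makes real-time, qualitative flow exploration possible, the sketch below performs one implicit diffusion step solved by Jacobi iteration (the grid size, viscosity and boundary handling are arbitrary).

```python
import numpy as np

def diffuse(field, nu, dt, iterations=20):
    """
    One unconditionally stable (implicit) diffusion step solved by Jacobi
    iteration, in the spirit of real-time "stable fluids" solvers.
    Grid spacing is taken as 1; boundary values are held fixed for simplicity.
    """
    a = nu * dt
    new = field.copy()
    for _ in range(iterations):
        new[1:-1, 1:-1] = (
            field[1:-1, 1:-1]
            + a * (new[:-2, 1:-1] + new[2:, 1:-1] + new[1:-1, :-2] + new[1:-1, 2:])
        ) / (1.0 + 4.0 * a)
    return new

# Example: smooth out a blob of dye on a 64x64 grid.
dye = np.zeros((64, 64))
dye[28:36, 28:36] = 1.0
dye = diffuse(dye, nu=0.1, dt=1.0)
print(f"total dye after one step: {dye.sum():.3f}")
```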

  12. Department of Defense Human Factors Engineering Technical Advisory Group Minutes of the Meeting, (15th), Held at San Diego, California, on 5-7 November 1985

    DTIC Science & Technology

    1985-11-01

    the group to be alert to changes in goals, noting that if the model is not sensitive to goal changes, it will lack validity. Mr. Hartzell announced... This increased emphasis on the soldier-machine interface has not been a sudden change. Instead it has been a gradual one coincident with and... point alone in affecting both design changes and operational doctrine for the system. Analysis of these data should first compare achieved

  13. The JPL telerobot operator control station. Part 2: Software

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Landell, B. Patrick; Oxenberg, Sheldon; Morimoto, Carl

    1989-01-01

    The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The software design of the operator control system is discussed.

  14. Effect of vibrotactile feedback on an EMG-based proportional cursor control system.

    PubMed

    Li, Shunchong; Chen, Xingyu; Zhang, Dingguo; Sheng, Xinjun; Zhu, Xiangyang

    2013-01-01

    Surface electromyography (sEMG) has been introduced into biomechatronic systems; however, most of these systems lack sensory feedback. In this paper, the effect of vibrotactile feedback on a myoelectric cursor control system is investigated quantitatively. Simultaneous and proportional control signals are extracted from EMG using a muscle synergy model. Different types of feedback, including vibrotactile feedback and visual feedback, are added, assessed and compared with each other. The results show that vibrotactile feedback is capable of improving the performance of an EMG-based human-machine interface.
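
    A minimal sketch of the synergy-based extraction step, assuming non-negative EMG envelopes and a two-synergy model factored with scikit-learn's NMF; the simulated data and the mapping of activations to cursor axes are illustrative and are not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Simulated non-negative EMG envelopes: 8 channels x 1000 samples, generated from
# two underlying "synergies" (made-up data; a real system would use rectified,
# low-pass-filtered sEMG).
rng = np.random.default_rng(0)
true_synergies = rng.random((8, 2))                              # channel weights
activations = np.abs(np.sin(np.linspace(0, 20, 1000) * rng.random((2, 1)) + 1))
emg = true_synergies @ activations + 0.01 * rng.random((8, 1000))

# Factor EMG into synergy weights W (channels x synergies) and activations H
# (synergies x time); H can then drive proportional 2-D cursor velocity.
model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(emg)        # shape (8, 2)
H = model.components_               # shape (2, 1000)

cursor_velocity = H.T               # each row of H mapped to one cursor axis
print("synergy weights shape:", W.shape, "activation shape:", H.shape)
```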

  15. Human Factors Design Guidelines for the Army Tactical Command and Control System (ATTCS) Soldier-Machine Interface. Version 2.0

    DTIC Science & Technology

    1992-05-01

    especially true for friend-enemy or danger-safe designations. Dots, dashes, shapes, and video effects are recommended. Care must be taken to avoid visual... MAY 92 10.3 Screen Design - Format 10.3.1.4 Use of Contrasting Features: Use contrasting features such as inverse video and color to call attention to... captions. Do not use reverse video or highlighting for labels. 13.2.3.2 Formatting: For single fields, locate the caption to the left of the entry fields

  16. A Machine Learning and Optimization Toolkit for the Swarm

    DTIC Science & Technology

    2014-11-17

    Machine Learning and Optimization Toolkit for the Swarm. Ilge Akkaya, Shuhei Emoto... 3. DATES COVERED 00-00-2014 to 00-00-2014 4. TITLE AND SUBTITLE A Machine Learning and Optimization Toolkit for the Swarm 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER... machine learning methodologies by providing the right interfaces between machine learning tools and

  17. Object Management Group object transaction service based on an X/Open and International Organization for Standardization open systems interconnection transaction processing kernel

    NASA Astrophysics Data System (ADS)

    Liang, J.; Sédillot, S.; Traverson, B.

    1997-09-01

    This paper addresses the federation of a transactional object standard - the Object Management Group (OMG) object transaction service (OTS) - with the X/Open distributed transaction processing (DTP) model and the International Organization for Standardization (ISO) open systems interconnection (OSI) transaction processing (TP) communication protocol. The two-phase commit propagation rules within a distributed transaction tree are similar in the X/Open, ISO and OMG models. Building an OTS on an OSI TP protocol machine is possible because the two specifications are largely complementary: OTS defines a set of external interfaces without a specific internal protocol machine, while OSI TP specifies an internal protocol machine without any application programming interface. Given these observations, and having already implemented an X/Open two-phase commit transaction toolkit based on an OSI TP protocol machine, we analyse the feasibility of using this implementation as a transaction service provider for the OMG interfaces. Based on the favourable result of this feasibility study, we are implementing an OTS-compliant system which, by building on the extensibility and openness strengths of OSI TP, is able to provide interoperability between the X/Open DTP and OMG OTS models.
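
    The commit-propagation rule shared by the three models can be illustrated with a minimal two-phase commit sketch; the classes and method names below are hypothetical stand-ins, not OTS or OSI TP interfaces.

```python
# Minimal two-phase commit sketch illustrating the propagation rule shared by the
# X/Open DTP, ISO OSI TP and OMG OTS models: commit only if every participant
# votes yes in phase one.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit

    def prepare(self) -> bool:           # phase 1: vote
        return self.can_commit

    def commit(self):                    # phase 2: commit
        print(f"{self.name}: committed")

    def rollback(self):                  # phase 2: abort
        print(f"{self.name}: rolled back")


def two_phase_commit(participants) -> bool:
    votes = [p.prepare() for p in participants]      # phase 1
    if all(votes):
        for p in participants:                       # phase 2 (commit)
            p.commit()
        return True
    for p in participants:                           # phase 2 (abort)
        p.rollback()
    return False


two_phase_commit([Participant("resource-A"), Participant("resource-B", can_commit=False)])
```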

  18. AIMSsim Version 2.3.4 - User Manual

    DTIC Science & Technology

    2008-01-01

    will be able to use the system effectively and with minimal training, a human-machine interface (HMI) prototype was developed... to use the sensor suite effectively and with minimal training, a human-machine interface (HMI) prototype was developed for... the AIMSsim research capabilities offer the experimenter a level of simulation detailed enough to conduct human performance analyses, which provide

  19. Image understanding and the man-machine interface II; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Technical Reports Server (NTRS)

    Barrett, Eamon B. (Editor); Pearson, James J. (Editor)

    1989-01-01

    Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.

  20. A Natural Language Interface to Databases

    NASA Technical Reports Server (NTRS)

    Ford, D. R.

    1990-01-01

    The development of a Natural Language Interface (NLI) is presented which is semantic-based and uses Conceptual Dependency representation. The system was developed using Lisp and currently runs on a Symbolics Lisp machine.

  1. A force-controllable macro-micro manipulator and its application to medical robots

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Uecker, Darrin R.; Wang, Yulun

    1994-01-01

    This paper describes an 8-degrees-of-freedom macro-micro robot. This robot is capable of performing tasks that require accurate force control, such as polishing, finishing, grinding, deburring, and cleaning. The design of the macro-micro mechanism, the control algorithms, and the hardware/software implementation of the algorithms are described in this paper. Initial experimental results are reported. In addition, this paper includes a discussion of medical surgery and the role that force control may play. We introduce a new class of robotic systems collectively called Robotic Enhancement Technology (RET). RET systems introduce the combination of robotic manipulation with human control to perform manipulation tasks beyond the individual capability of either human or machine. The RET class of robotic systems offers new challenges in mechanism design, control-law development, and man/machine interface design. We believe force-controllable mechanisms such as the macro-micro structure we have developed are a necessary part of RET. Work in progress in the area of RET systems and their application to minimally invasive surgery is presented, along with future research directions.
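
    As a minimal illustration of explicit force control of the kind such a force-controllable mechanism requires (not the authors' control law), the sketch below turns a force error into a commanded velocity and drives a simulated stiff contact to a target force; the gain and stiffness are made up.

```python
def force_control_step(f_desired, f_measured, kp=0.002):
    """Proportional explicit force control: force error -> commanded velocity."""
    return kp * (f_desired - f_measured)

# Simulate contact with a stiff surface modeled as f = k * x (made-up stiffness).
k, x, dt = 20000.0, 0.0, 0.001
for _ in range(2000):
    f = k * x
    x += force_control_step(5.0, f) * dt    # the micro stage integrates the velocity command
print(f"contact force after 2 s: {k * x:.2f} N (target 5.0 N)")
```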

  2. Chips: A Tool for Developing Software Interfaces Interactively.

    ERIC Educational Resources Information Center

    Cunningham, Robert E.; And Others

    This report provides a detailed description of Chips, an interactive tool for developing software employing graphical/computer interfaces on Xerox Lisp machines. It is noted that Chips, which is implemented as a collection of customizable classes, provides the programmer with a rich graphical interface for the creation of rich graphical…

  3. Emergent coordination underlying learning to reach to grasp with a brain-machine interface.

    PubMed

    Vaidya, Mukta; Balasubramanian, Karthikeyan; Southerland, Joshua; Badreldin, Islam; Eleryan, Ahmed; Shattuck, Kelsey; Gururangan, Suchin; Slutzky, Marc; Osborne, Leslie; Fagg, Andrew; Oweiss, Karim; Hatsopoulos, Nicholas G

    2018-04-01

    The development of coordinated reach-to-grasp movement has been well studied in infants and children. However, the role of motor cortex during this development is unclear because it is difficult to study in humans. We took the approach of using a brain-machine interface (BMI) paradigm in rhesus macaques with prior therapeutic amputations to examine the emergence of novel, coordinated reach to grasp. Previous research has shown that after amputation, the cortical area previously involved in the control of the lost limb undergoes reorganization, but prior BMI work has largely relied on finding neurons that already encode specific movement-related information. In this study, we taught macaques to cortically control a robotic arm and hand through operant conditioning, using neurons that were not explicitly reach or grasp related. Over the course of training, stereotypical patterns emerged and stabilized in the cross-covariance between the reaching and grasping velocity profiles, between pairs of neurons involved in controlling reach and grasp, and to a comparable, but lesser, extent between other stable neurons in the network. In fact, we found evidence of this structured coordination between pairs composed of all combinations of neurons decoding reach or grasp and other stable neurons in the network. The degree of and participation in coordination was highly correlated across all pair types. Our approach provides a unique model for studying the development of novel, coordinated reach-to-grasp movement at the behavioral and cortical levels. NEW & NOTEWORTHY Given that motor cortex undergoes reorganization after amputation, our work focuses on training nonhuman primates with chronic amputations to use neurons that are not reach or grasp related to control a robotic arm to reach to grasp through the use of operant conditioning, mimicking early development. We studied the development of a novel, coordinated behavior at the behavioral and cortical level, and the neural plasticity in M1 associated with learning to use a brain-machine interface.
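
    The coordination measure described above is based on cross-covariance between reach and grasp velocity profiles; a minimal sketch of such a computation on toy signals is shown below (the lag convention and normalization are one reasonable choice, not necessarily the authors').

```python
import numpy as np

def cross_covariance(x, y, max_lag):
    """
    Normalized cross-covariance between two signals (e.g. reach velocity and
    grasp-aperture velocity) at lags -max_lag..+max_lag samples.
    """
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    denom = np.std(x) * np.std(y) * len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([
        np.sum(x[:len(x) - lag] * y[lag:]) if lag >= 0 else np.sum(x[-lag:] * y[:len(y) + lag])
        for lag in lags
    ]) / denom
    return lags, cc

# Toy example: grasp velocity is a delayed, noisy copy of reach velocity.
rng = np.random.default_rng(1)
t = np.linspace(0, 2, 400)
reach = np.exp(-((t - 0.8) ** 2) / 0.05)
grasp = np.roll(reach, 30) + 0.05 * rng.standard_normal(t.size)
lags, cc = cross_covariance(reach, grasp, max_lag=60)
print("peak coupling at lag (samples):", lags[np.argmax(cc)])
```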

  4. Tensile and bending fatigue of the adhesive interface to dentin.

    PubMed

    Belli, Renan; Baratieri, Luiz Narciso; Braem, Marc; Petschelt, Anselm; Lohbauer, Ulrich

    2010-12-01

    The aim of this study was to evaluate the fatigue limits of dentin-composite interfaces established with either an etch-and-rinse or a one-step self-etch adhesive system under tensile and bending configurations. Flat specimens (1.2 mm×5 mm×35 mm) were prepared using a plexiglass mold in which dentin sections from human third molars were bonded to a resin composite, with the interface centrally located. Syntac Classic and G-Bond were used as adhesives and applied according to the manufacturers' instructions. The fluorochrome Rhodamine B was added to the adhesives to allow for fractographic evaluation. Tensile strength was measured in a universal testing machine and bending strength (n=15) in a Flex machine (Flex, University of Antwerp, Belgium). Tensile (TFL) and bending fatigue limits (BFL) (n=25) were determined under wet conditions for 10^4 cycles following a staircase approach. Interface morphology and fracture mechanisms were observed using light, confocal laser scanning and scanning electron microscopy. Statistical analysis was performed using three-way ANOVA (mod LSD test, p<0.05). Tensile and bending characteristic strengths at 63.2% failure probability were 23.8 MPa and 71.5 MPa for Syntac, and 24.7 MPa and 72.3 MPa for G-Bond, respectively. Regarding the applied methods, no significant differences were detected between adhesives. However, fatigue limits for G-Bond (TFL=5.9 MPa; BFL=36.2 MPa) were significantly reduced compared to Syntac (TFL=12.6 MPa; BFL=49.7 MPa). Fracture modes of Syntac were generally adhesive in nature, between the adhesive resin and dentin, while G-Bond showed fracture planes involving both the adhesive-dentin interface and the adhesive resin. Cyclic loading under tensile and bending configurations led to significant strength degradation, with a more pronounced fatigue limit decrease for G-Bond. The greater decrease in fracture strength was observed in the tensile configuration. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
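
    A minimal sketch of the staircase (up-and-down) procedure used to estimate fatigue limits; the survival model, step size and simple mean-of-levels estimator are illustrative simplifications (standard practice often uses the Dixon-Mood estimator on the staircase data).

```python
import numpy as np

def staircase_fatigue_limit(first_stress, step, n_specimens, survive_prob, rng):
    """
    Up-and-down (staircase) procedure: if a specimen survives 10^4 cycles, the
    next specimen is tested one step higher, otherwise one step lower; the
    fatigue limit is estimated here as the mean of the tested stress levels.
    `survive_prob(stress)` is a stand-in for a real fatigue test.
    """
    stress = first_stress
    levels = []
    for _ in range(n_specimens):
        levels.append(stress)
        survived = rng.random() < survive_prob(stress)
        stress += step if survived else -step
    return np.mean(levels), levels

# Toy material model: survival probability falls off around a "true" limit of 12 MPa.
rng = np.random.default_rng(0)
limit, levels = staircase_fatigue_limit(
    first_stress=15.0, step=1.0, n_specimens=25,
    survive_prob=lambda s: 1.0 / (1.0 + np.exp((s - 12.0) / 0.8)), rng=rng)
print(f"estimated fatigue limit: {limit:.1f} MPa")
```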

  5. Methodology for creating dedicated machine and algorithm on sunflower counting

    NASA Astrophysics Data System (ADS)

    Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand

    2007-09-01

    In order to sell grain lots in European countries, seed industries need government certification. This certification requires purity testing, seed counting (to quantify the specified seed species and other impurities in lots), and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. Presently these analyses are still performed manually by skilled operators. Previous work has shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and several identification algorithms have been presented. Until now, most of the work in this domain has been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine aimed at identifying and sorting seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor) and a PC hosting the GUI (human-machine interface) of the system. Its operation relies on stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach in order to simulate the entire vision chain (image acquisition, feature extraction, identification) in the Matlab environment. To port this processing to dedicated hardware, all of the algorithms were developed without the use of Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
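
    The identification algorithm is described as being based on distances between groups; a minimal nearest-centroid sketch in that spirit is shown below, with toy three-feature data standing in for the roughly 110 morphology/colour/texture features.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute the mean feature vector (centroid) of each seed class."""
    classes = np.unique(labels)
    return classes, np.array([features[labels == c].mean(axis=0) for c in classes])

def classify(sample, classes, centroids):
    """Assign a seed to the class whose centroid is closest (Euclidean distance)."""
    distances = np.linalg.norm(centroids - sample, axis=1)
    return classes[np.argmin(distances)]

# Toy data: 3 visual features for two seed classes (made-up values).
rng = np.random.default_rng(2)
sunflower = rng.normal([5.0, 2.0, 0.8], 0.2, size=(50, 3))
impurity = rng.normal([3.0, 1.0, 0.3], 0.2, size=(50, 3))
X = np.vstack([sunflower, impurity])
y = np.array(["sunflower"] * 50 + ["impurity"] * 50)

classes, centroids = fit_centroids(X, y)
print(classify(np.array([4.9, 2.1, 0.75]), classes, centroids))   # -> "sunflower"
```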

  6. Advanced warfighter machine interface (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Franks, Erin

    2005-05-01

    Future military crewmen may have more individual and shared tasks to complete throughout a mission as a result of smaller crew sizes and an increased number of technology interactions. To maintain reasonable workload levels, the Warfighter Machine Interface (WMI) must provide information in a consistent, logical manner, tailored to the environment in which the soldier will be completing their mission. This paper addresses design criteria for creating an advanced, multi-modal warfighter machine interface for on-the-move mounted operations. The Vetronics Technology Integration (VTI) WMI currently provides capabilities such as mission planning and rehearsal, voice and data communications, and manned/unmanned vehicle payload and mobility control. A history of the crewstation and more importantly, the WMI software will be provided with an overview of requirements and criteria used for completing the design. Multiple phases of field and laboratory testing provide the opportunity to evaluate the design and hardware in stationary and motion environments. Lessons learned related to system usability and user performance are presented with mitigation strategies to be tested in the future.

  7. Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective

    PubMed Central

    Mattout, Jérémie

    2012-01-01

    A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291

  8. Humanoids Designed to do Work

    NASA Technical Reports Server (NTRS)

    Ambrose, Robert; Askew, Scott; Bluethmann, William; Diftler, Myron

    2001-01-01

    NASA began with the challenge of building a robot for doing assembly, maintenance, and diagnostic work in the 0-g environment of space. A robot with human form was then chosen as the best means of achieving that mission. The goal was not to build a machine to look like a human, but rather, to build a system that could do the same work. Robonaut could be inserted into the existing space environment, designed for a population of astronauts, and be able to perform many of the same tasks, with the same tools, and use the same interfaces. Rather than change that world to accommodate the robot, instead Robonaut accepts that it exists for humans, and must conform to it. While it would be easier to build a robot if all the interfaces could be changed, this is not the reality of space at present, where NASA has invested billions of dollars building spacecraft like the Space Shuttle and International Space Station. It is not possible to go back in time, and redesign those systems to accommodate full automation, but a robot can be built that adapts to them. This paper describes that design process, and the resultant solution, that NASA has named Robonaut.

  9. A hardware/software environment to support R&D in intelligent machines and mobile robotic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.

    1990-01-01

    The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e. incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on-board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on-board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g.: code portability, re-usability in different experimental scenarios, modularity, concurrent computer hardware transparent to the applications programmer, future support for multiple mobile robots, support for human-machine interface modules, and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.

  10. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for commanding and supervising complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system; it also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization, built on an open-source real-time operating system, is presented. The software design and the features that make the architecture generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  11. Selective visual attention to drive cognitive brain–machine interfaces: from concepts to neurofeedback and rehabilitation applications

    PubMed Central

    Astrand, Elaine; Wardak, Claire; Ben Hamed, Suliann

    2014-01-01

    Brain–machine interfaces (BMIs) using motor cortical activity to drive an external effector like a screen cursor or a robotic arm have seen enormous success and proven their great rehabilitation potential. An emerging parallel effort is now directed to BMIs controlled by endogenous cognitive activity, also called cognitive BMIs. While more challenging, this approach opens new dimensions to the rehabilitation of cognitive disorders. In the present work, we focus on BMIs driven by visuospatial attention signals and we provide a critical review of these studies in the light of the accumulated knowledge about the psychophysics, anatomy, and neurophysiology of visual spatial attention. Importantly, we provide a unique comparative overview of several studies, ranging from non-invasive to invasive work in humans and non-human primates, that decode attention-related information from ongoing neuronal activity. We discuss these studies in the light of the challenges attention-driven cognitive BMIs have to face. In a second part of the review, we discuss past and current attention-based neurofeedback studies, describing both the covert effects of neurofeedback on neuronal activity and its overt behavioral effects. Importantly, we compare neurofeedback studies based on the amplitude of cortical activity to studies based on the enhancement of cortical information content. Lastly, we discuss several lines of future research and applications for attention-driven cognitive brain-computer interfaces (BCIs), including the rehabilitation of cognitive deficits, restored communication in locked-in patients, and open-field applications for enhanced cognition in normal subjects. The core motivation of this work is the key idea that the improvement of current cognitive BMIs for therapeutic and open-field applications needs to be grounded in a proper interdisciplinary understanding of the physiology of the cognitive function of interest, be it spatial attention, working memory or any other cognitive signal.

  12. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general.

    PubMed

    Zander, Thorsten O; Kothe, Christian

    2011-04-01

    Cognitive monitoring is an approach utilizing realtime brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.

  13. Hand-in-hand advances in biomedical engineering and sensorimotor restoration.

    PubMed

    Pisotta, Iolanda; Perruchoud, David; Ionta, Silvio

    2015-05-15

    Living in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration ability. Several clinical conditions dramatically affect the constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown. The most recent technological developments have succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of so-called brain-machine interfaces. Combining rehabilitation robotics and experimental neuroscience, the introduction of brain-machine interfaces into clinical protocols has provided the technological solution for bypassing the neural disconnection and restoring sensorimotor function. Based on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed. Typical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures achieve symptom relief, but their applicability to more severe disconnection pathologies (e.g. spinal cord injury or amputation) is limited. Here we review how state-of-the-art solutions in biomedical engineering are continuously raising expectations in sensorimotor rehabilitation, as well as the current challenges, especially with regard to the translation of signals from brain-machine interfaces into sensory feedback and the incorporation of brain-machine interfaces into daily activities. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Causal network in a deafferented non-human primate brain.

    PubMed

    Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G

    2015-01-01

    De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble and/or changes in the role of neurons, i.e., excitatory/inhibitory. In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to BMI (brain-machine interface) learning. The BMI controlled a robot for reach-and-grasp behavior. The motor cortical regions used for the BMI had been deafferented by chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A generalized linear model-framework based Granger causality (GLM-GC) technique was used to estimate the ensemble connectivity. Model selection was based on the AIC (Akaike Information Criterion).
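
    A minimal sketch of linear (Gaussian GLM) Granger causality with AIC-based comparison of reduced and full autoregressive models; the lag order, toy signals and AIC form are illustrative and do not reproduce the paper's exact GLM formulation for spike trains.

```python
import numpy as np

def ar_design(y, x, order):
    """Build lagged design matrices for predicting y[t] from past values."""
    Y = y[order:]
    own = np.column_stack([y[order - k:len(y) - k] for k in range(1, order + 1)])
    other = np.column_stack([x[order - k:len(x) - k] for k in range(1, order + 1)])
    ones = np.ones((len(Y), 1))
    return Y, np.hstack([ones, own]), np.hstack([ones, own, other])

def granger_aic(y, x, order=5):
    """
    Linear Granger causality from x to y: compare an AR model of y on its own past
    (reduced) with one that also includes x's past (full), using residual variance
    and AIC. A lower AIC for the full model suggests x helps predict y.
    """
    Y, Xr, Xf = ar_design(np.asarray(y, float), np.asarray(x, float), order)
    n = len(Y)
    aic = {}
    for name, X in (("reduced", Xr), ("full", Xf)):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        rss = np.sum((Y - X @ beta) ** 2)
        aic[name] = n * np.log(rss / n) + 2 * X.shape[1]   # Gaussian AIC up to a constant
    return aic

# Toy signals: y is partly driven by the past of x.
rng = np.random.default_rng(3)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 2] + 0.1 * rng.standard_normal()
print(granger_aic(y, x))   # the "full" model should have the lower AIC
```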

  15. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian; Brightwell, Ronald B.; Grant, Ryan

    This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  17. Remapping residual coordination for controlling assistive devices and recovering motor functions

    PubMed Central

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias; Mussa-Ivaldi, Ferdinando A.; Casadio, Maura

    2015-01-01

    The concept of human motor redundancy attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any single well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in human–machine interfaces (HMIs) developments. The HMI can act at different levels decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs bypass completely the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user’s residual upper body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between left and right side of the body, with an increase of mobility and strength of all the degrees of freedom in the participants involved in the control of the interface. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. PMID:26341935
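
    A minimal sketch of the PCA step that maps high-dimensional body signals to two cursor coordinates; the simulated signals and the choice of retaining the two leading components are illustrative of the approach described above, not the study's calibration procedure.

```python
import numpy as np

def fit_body_to_cursor_map(signals, n_components=2):
    """
    Fit a PCA-based map from high-dimensional upper-body motion signals
    (samples x channels) to 2-D cursor coordinates. Returns the channel means
    and the leading principal directions.
    """
    mean = signals.mean(axis=0)
    centered = signals - mean
    # SVD of the centered data; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def body_to_cursor(sample, mean, components):
    """Project one body-signal sample onto the retained components -> cursor (x, y)."""
    return components @ (sample - mean)

# Toy data: 8 body-signal channels whose variance is dominated by 2 latent movements.
rng = np.random.default_rng(4)
latent = rng.standard_normal((500, 2))
mixing = rng.standard_normal((2, 8))
body = latent @ mixing + 0.05 * rng.standard_normal((500, 8))

mean, components = fit_body_to_cursor_map(body)
print("cursor position for one posture:", body_to_cursor(body[0], mean, components))
```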

  18. SOCR data dashboard: an integrated big data archive mashing medicare, labor, census and econometric information.

    PubMed

    Husain, Syed S; Kalinin, Alexandr; Truong, Anh; Dinov, Ivo D

    Intuitive formulation of informative and computationally efficient queries on big and complex datasets presents a number of challenges. As data collection is increasingly streamlined and ubiquitous, data exploration, discovery and analytics get considerably harder. Exploratory querying of heterogeneous and multi-source information is both difficult and necessary to advance our knowledge about the world around us. We developed a mechanism to integrate dispersed multi-source data and serve the mashed information via human and machine interfaces in a secure, scalable manner. This process facilitates the exploration of subtle associations between variables, population strata, or clusters of data elements, which may be opaque to standard independent inspection of the individual sources. This new platform includes a device-agnostic tool (Dashboard webapp, http://socr.umich.edu/HTML5/Dashboard/) for graphically querying, navigating and exploring the multivariate associations in complex heterogeneous datasets. The paper illustrates this core functionality and service-oriented infrastructure using healthcare data (e.g., US data from the 2010 Census, Demographic and Economic surveys, Bureau of Labor Statistics, and Center for Medicare Services) as well as Parkinson's Disease neuroimaging data. Both the back-end data archive and the front-end dashboard interfaces are continuously expanded to include additional data elements and new ways to customize the human and machine interactions. A client-side data import utility allows for easy and intuitive integration of user-supplied datasets. This completely open-science framework may be used for exploratory analytics, confirmatory analyses, meta-analyses, and education and training purposes in a wide variety of fields.

  19. Context-aware brain-computer interfaces: exploring the information space of user, technical system and environment

    NASA Astrophysics Data System (ADS)

    Zander, T. O.; Jatzev, S.

    2012-02-01

    Brain-computer interface (BCI) systems are usually applied in highly controlled environments such as research laboratories or clinical setups. However, many BCI-based applications are implemented in more complex environments. For example, patients might want to use a BCI system at home, and users without disabilities could benefit from BCI systems in special working environments. In these contexts, it might be more difficult to reliably infer information about brain activity, because many intervening factors add up and disturb the BCI feature space. One solution for this problem would be adding context awareness to the system. We propose to augment the available information space with additional channels carrying information about the user state, the environment and the technical system. In particular, passive BCI systems seem to be capable of adding highly relevant context information—otherwise covert aspects of user state. In this paper, we present a theoretical framework based on general human-machine system research for adding context awareness to a BCI system. Building on that, we present results from a study on a passive BCI, which allows access to the covert aspect of user state related to the perceived loss of control. This study is a proof of concept and demonstrates that context awareness could beneficially be implemented in and combined with a BCI system or a general human-machine system. The EEG data from this experiment are available for public download at www.phypa.org. Parts of this work have already been presented in non-journal publications. This will be indicated specifically by appropriate references in the text.

  20. Mental State Assessment and Validation Using Personalized Physiological Biometrics

    PubMed Central

    Patel, Aashish N.; Howard, Michael D.; Roach, Shane M.; Jones, Aaron P.; Bryant, Natalie B.; Robinson, Charles S. H.; Clark, Vincent P.; Pilly, Praveen K.

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, training systems, and will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.
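
    A minimal sketch of the permutation test used to assess the significance of a biometric-performance correlation; the variable names and toy data are invented, and the real pipeline additionally involves k-fold cross-validation of the biometric models.

```python
import numpy as np

def permutation_test_correlation(biometric, performance, n_permutations=10000, seed=0):
    """
    Permutation test for the Pearson correlation between a personalized biometric
    (e.g. a fatigue score) and task performance (e.g. detection accuracy):
    shuffle one variable to build the null distribution of |r|.
    """
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(biometric, performance)[0, 1]
    shuffled = performance.copy()
    null = np.empty(n_permutations)
    for i in range(n_permutations):
        rng.shuffle(shuffled)
        null[i] = np.corrcoef(biometric, shuffled)[0, 1]
    p_value = np.mean(np.abs(null) >= abs(observed))
    return observed, p_value

# Toy data: performance weakly (negatively) tied to a fatigue-like biometric.
rng = np.random.default_rng(5)
fatigue = rng.standard_normal(120)
accuracy = -0.3 * fatigue + rng.standard_normal(120)
r, p = permutation_test_correlation(fatigue, accuracy)
print(f"r = {r:.2f}, permutation p = {p:.4f}")
```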

  1. The future of the provision process for mobility assistive technology: a survey of providers.

    PubMed

    Dicianno, Brad E; Joseph, James; Eckstein, Stacy; Zigler, Christina K; Quinby, Eleanor J; Schmeler, Mark R; Schein, Richard M; Pearlman, Jon; Cooper, Rory A

    2018-03-20

    The purpose of this study was to evaluate the opinions of providers of mobility assistive technologies to help inform a research agenda and set priorities. This survey study was anonymous and gathered opinions of individuals who participate in the process to provide wheelchairs and other assistive technologies to clients. Participants were asked to rank the importance of developing various technologies and rank items against each other in terms of order of importance. Participants were also asked to respond to several open-ended questions or statements. A total of 161 providers from 35 states within the USA consented to participation and completed the survey. This survey revealed themes of advanced wheelchair design, assistive robotics and intelligent systems, human-machine interfaces and smart device applications. It also outlined priorities for researchers to provide continuing education to clients and providers. These themes will be used to develop research and development priorities. Implications for Rehabilitation: • Research in advanced wheelchair design is needed to facilitate travel and environmental access with wheelchairs and to develop alternative power sources for wheelchairs. • New assistive robotics and intelligent systems are needed to help wheelchairs overcome obstacles or self-adjust, assist wheelchair navigation in the community, assist caregivers and transfers, and aid ambulation. • Innovations in human-machine interfaces may help advance the control of mobility devices and robots with the brain, eye movements, facial gesture recognition or other systems. • Development of new smart devices is needed for better control of the environment, monitoring activity and promoting healthy behaviours.

  2. Mental State Assessment and Validation Using Personalized Physiological Biometrics.

    PubMed

    Patel, Aashish N; Howard, Michael D; Roach, Shane M; Jones, Aaron P; Bryant, Natalie B; Robinson, Charles S H; Clark, Vincent P; Pilly, Praveen K

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, training systems, and will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.

  3. Illusory movement perception improves motor control for prosthetic hands

    PubMed Central

    Marasco, Paul D.; Hebert, Jacqueline S.; Sensinger, Jon W.; Shell, Courtney E.; Schofield, Jonathon S.; Thumser, Zachary C.; Nataraj, Raviraj; Beckler, Dylan T.; Dawson, Michael R.; Blustein, Dan H.; Gill, Satinder; Mensh, Brett D.; Granja-Vazquez, Rafael; Newcomb, Madeline D.; Carey, Jason P.; Orzell, Beth M.

    2018-01-01

    To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement’s progress. This largely non-conscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. Here we report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. PMID:29540617

  4. Automated placement of interfaces in conformational kinetics calculations using machine learning

    NASA Astrophysics Data System (ADS)

    Grazioli, Gianmarc; Butts, Carter T.; Andricioaei, Ioan

    2017-10-01

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.

  5. Automated placement of interfaces in conformational kinetics calculations using machine learning.

    PubMed

    Grazioli, Gianmarc; Butts, Carter T; Andricioaei, Ioan

    2017-10-21

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.
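
    The paper's automated cell-placement algorithm is not reproduced here; as a generic illustration, the sketch below uses k-means centers to induce a Voronoi partition of a toy two-dimensional feature space, which is the kind of cell definition such kinetic sampling methods rely on.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy two-dimensional "collective variable" samples with two metastable basins;
# a real application would use features describing macromolecular configurations.
rng = np.random.default_rng(6)
basin_a = rng.normal([-1.0, 0.0], 0.25, size=(1000, 2))
basin_b = rng.normal([+1.0, 0.0], 0.25, size=(1000, 2))
samples = np.vstack([basin_a, basin_b])

# k-means centers induce a Voronoi partition: every configuration belongs to the
# cell of its nearest center. In a kinetics calculation, fluxes through the
# interfaces between these cells would be accumulated along time-ordered trajectories.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(samples)
cells = kmeans.predict(samples)
print("cell populations:", np.bincount(cells, minlength=6))
```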

  6. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    From the inception of power systems, synchronous machines have acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, power electronics interfaces are playing a growing role as they are the primary interface for several types of renewable energy sources and storage technologies. As the role of power electronics in systems continues to grow, it is crucial to investigate the properties of bulk power systems in low inertia settings. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. Furthermore, the inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings and, hence, differing levels of inertia. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the interaction between the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
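
    A minimal sketch of the small-signal stability test described above: linearize, then check that all eigenvalues of the state matrix have negative real parts. The three-state model and its numbers below are invented for illustration and are not the paper's machine-inverter model.

```python
import numpy as np

def is_small_signal_stable(A):
    """A linearized model x' = A x is stable if every eigenvalue has negative real part."""
    eigenvalues = np.linalg.eigvals(A)
    return bool(np.all(eigenvalues.real < 0)), eigenvalues

# Illustrative 3-state linearization (not the paper's model): rotor angle and speed
# of the machine plus one aggregated inverter current-control state. Parameter
# values are made up purely to show the eigenvalue test.
H, D, Ks = 4.0, 1.5, 1.2          # inertia, damping, synchronizing torque (p.u.)
tau_i, k_c = 0.05, 0.8            # inverter control time constant and coupling gain
A = np.array([
    [0.0,            1.0,          0.0],
    [-Ks / (2 * H), -D / (2 * H), -k_c / (2 * H)],
    [0.0,            k_c / tau_i, -1.0 / tau_i],
])
stable, eigs = is_small_signal_stable(A)
print("stable:", stable)
print("eigenvalues:", np.round(eigs, 3))
```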

  7. A Brain-Machine-Muscle Interface for Restoring Hindlimb Locomotion after Complete Spinal Transection in Rats

    PubMed Central

    Alam, Monzurul; Chen, Xi; Zhang, Zicong; Li, Yan; He, Jufang

    2014-01-01

    A brain-machine interface (BMI) is a neuroprosthetic device that can restore motor function of individuals with paralysis. Although the feasibility of BMI control of upper-limb neuroprostheses has been demonstrated, a BMI for the restoration of lower-limb motor functions has not yet been developed. The objective of this study was to determine if gait-related information can be captured from neural activity recorded from the primary motor cortex of rats, and if this neural information can be used to stimulate paralysed hindlimb muscles after complete spinal cord transection. Neural activity was recorded from the hindlimb area of the primary motor cortex of six female Sprague Dawley rats during treadmill locomotion before and after mid-thoracic transection. Before spinal transection there was a strong association between neural activity and the step cycle. This association decreased after spinal transection. However, the locomotive state (standing vs. walking) could still be successfully decoded from neural recordings made after spinal transection. A novel BMI device was developed that processed this neural information in real-time and used it to control electrical stimulation of paralysed hindlimb muscles. This system was able to elicit hindlimb muscle contractions that mimicked forelimb stepping. We propose this lower-limb BMI as a future neuroprosthesis for human paraplegics. PMID:25084446
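
    A minimal sketch of decoding the locomotive state (standing vs. walking) from binned firing rates with a linear classifier; the simulated spike counts and the use of logistic regression are illustrative and are not the study's decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data standing in for binned firing rates (trials x neurons) from the hindlimb
# motor cortex; labels are the locomotive state (0 = standing, 1 = walking).
# Real decoding would use recorded spike counts, not simulated ones.
rng = np.random.default_rng(7)
n_trials, n_neurons = 200, 30
rates_standing = rng.poisson(5.0, size=(n_trials // 2, n_neurons))
rates_walking = rng.poisson(5.0 + 2.0 * (rng.random(n_neurons) > 0.5),
                            size=(n_trials // 2, n_neurons))
X = np.vstack([rates_standing, rates_walking]).astype(float)
y = np.array([0] * (n_trials // 2) + [1] * (n_trials // 2))

# A linear decoder of the locomotive state; its output could gate muscle stimulation.
decoder = LogisticRegression(max_iter=1000)
print("cross-validated decoding accuracy:",
      cross_val_score(decoder, X, y, cv=5).mean().round(2))
```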

  8. Long-Term Training with a Brain-Machine Interface-Based Gait Protocol Induces Partial Neurological Recovery in Paraplegic Patients.

    PubMed

    Donati, Ana R C; Shokur, Solaiman; Morya, Edgard; Campos, Debora S F; Moioli, Renan C; Gitti, Claudia M; Augusto, Patricia B; Tripodi, Sandra; Pires, Cristhiane G; Pereira, Gislaine A; Brasil, Fabricio L; Gallo, Simone; Lin, Anthony A; Takigami, Angelo K; Aratanha, Maria A; Joshi, Sanjay; Bleuler, Hannes; Cheng, Gordon; Rudolph, Alan; Nicolelis, Miguel A L

    2016-08-11

    Brain-machine interfaces (BMIs) provide a new assistive strategy aimed at restoring mobility in severely paralyzed patients. Yet, no study in animals or in human subjects has indicated that long-term BMI training could induce any type of clinical recovery. Eight chronic (3-13 years) spinal cord injury (SCI) paraplegics were subjected to long-term training (12 months) with a multi-stage BMI-based gait neurorehabilitation paradigm aimed at restoring locomotion. This paradigm combined intense immersive virtual reality training, enriched visual-tactile feedback, and walking with two EEG-controlled robotic actuators, including a custom-designed lower limb exoskeleton capable of delivering tactile feedback to subjects. Following 12 months of training with this paradigm, all eight patients experienced neurological improvements in somatic sensation (pain localization, fine/crude touch, and proprioceptive sensing) in multiple dermatomes. Patients also regained voluntary motor control in key muscles below the SCI level, as measured by EMGs, resulting in marked improvement in their walking index. As a result, 50% of these patients were upgraded to an incomplete paraplegia classification. Neurological recovery was paralleled by the reemergence of lower limb motor imagery at cortical level. We hypothesize that this unprecedented neurological recovery results from both cortical and spinal cord plasticity triggered by long-term BMI usage.

  9. Long-Term Training with a Brain-Machine Interface-Based Gait Protocol Induces Partial Neurological Recovery in Paraplegic Patients

    PubMed Central

    Donati, Ana R. C.; Shokur, Solaiman; Morya, Edgard; Campos, Debora S. F.; Moioli, Renan C.; Gitti, Claudia M.; Augusto, Patricia B.; Tripodi, Sandra; Pires, Cristhiane G.; Pereira, Gislaine A.; Brasil, Fabricio L.; Gallo, Simone; Lin, Anthony A.; Takigami, Angelo K.; Aratanha, Maria A.; Joshi, Sanjay; Bleuler, Hannes; Cheng, Gordon; Rudolph, Alan; Nicolelis, Miguel A. L.

    2016-01-01

    Brain-machine interfaces (BMIs) provide a new assistive strategy aimed at restoring mobility in severely paralyzed patients. Yet, no study in animals or in human subjects has indicated that long-term BMI training could induce any type of clinical recovery. Eight chronic (3–13 years) spinal cord injury (SCI) paraplegics were subjected to long-term training (12 months) with a multi-stage BMI-based gait neurorehabilitation paradigm aimed at restoring locomotion. This paradigm combined intense immersive virtual reality training, enriched visual-tactile feedback, and walking with two EEG-controlled robotic actuators, including a custom-designed lower limb exoskeleton capable of delivering tactile feedback to subjects. Following 12 months of training with this paradigm, all eight patients experienced neurological improvements in somatic sensation (pain localization, fine/crude touch, and proprioceptive sensing) in multiple dermatomes. Patients also regained voluntary motor control in key muscles below the SCI level, as measured by EMGs, resulting in marked improvement in their walking index. As a result, 50% of these patients were upgraded to an incomplete paraplegia classification. Neurological recovery was paralleled by the reemergence of lower limb motor imagery at cortical level. We hypothesize that this unprecedented neurological recovery results from both cortical and spinal cord plasticity triggered by long-term BMI usage. PMID:27513629

  10. Sensory motor remapping of space in human–machine interfaces

    PubMed Central

    Mussa-Ivaldi, Ferdinando A.; Casadio, Maura; Danziger, Zachary C.; Mosier, Kristine M.; Scheidt, Robert A.

    2012-01-01

    Studies of adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. These studies have also pointed out that adaptation to novel dynamics is aimed at preserving the trajectories of a controlled endpoint, either the hand of a subject or a transported object. We review some of these experiments and present more recent studies aimed at understanding how the motor system forms representations of the physical space in which actions take place. An extensive line of investigations in visual information processing has dealt with the issue of how the Euclidean properties of space are recovered from visual signals that do not appear to possess these properties. The same question is addressed here in the context of motor behavior and motor learning by observing how people remap hand gestures and body motions that control the state of an external device. We present some theoretical considerations and experimental evidence about the ability of the nervous system to create novel patterns of coordination that are consistent with the representation of extrapersonal space. We also discuss the perspective of endowing human–machine interfaces with learning algorithms that, combined with human learning, may facilitate the control of powered wheelchairs and other assistive devices. PMID:21741543

  11. Multiscale decoding for reliable brain-machine interface performance over time.

    PubMed

    Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M

    2017-07-01

    Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
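
    One way to picture the multiscale idea (without reproducing the authors' decoder) is that fast spike features update every millisecond while slower LFP features update every few tens of milliseconds and are held between updates, so a single estimator can run at the fast time base. The bin widths, channel counts, and random data below are assumptions for illustration only.

```python
# Simplified multiscale feature fusion sketch (not the paper's decoder): per-ms
# spike counts are concatenated with the most recent 50 ms LFP power features,
# which are held constant between their updates.
import numpy as np

rng = np.random.default_rng(2)
T_ms = 1000
spikes = rng.poisson(0.05, size=(T_ms, 24))     # per-millisecond spike counts
lfp_power = rng.normal(size=(T_ms // 50, 8))    # one LFP feature row per 50 ms window

fused = np.array([np.concatenate([spikes[t], lfp_power[t // 50]]) for t in range(T_ms)])
print(fused.shape)   # (1000, 32): fused features at the fast (1 ms) time base
# A decoder (e.g., a Kalman filter or regression onto joint angles) would be fit on `fused`.
```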

  12. A Brain-Machine Interface Based on ERD/ERS for an Upper-Limb Exoskeleton Control.

    PubMed

    Tang, Zhichuan; Sun, Shouqian; Zhang, Sanyuan; Chen, Yumiao; Li, Chao; Chen, Shi

    2016-12-02

    To recognize the user's motion intention, brain-machine interfaces (BMI) usually decode movements from cortical activity to control exoskeletons and neuroprostheses for daily activities. The aim of this paper is to investigate whether self-induced variations of the electroencephalogram (EEG) can be useful as control signals for an upper-limb exoskeleton developed by us. A BMI based on event-related desynchronization/synchronization (ERD/ERS) is proposed. In the decoder-training phase, we investigate the offline classification performance of left versus right hand and left hand versus both feet by using motor execution (ME) or motor imagery (MI). The results indicate that the accuracies of ME sessions are higher than those of MI sessions, and left hand versus both feet paradigm achieves a better classification performance, which would be used in the online-control phase. In the online-control phase, the trained decoder is tested in two scenarios (wearing or without wearing the exoskeleton). The MI and ME sessions wearing the exoskeleton achieve mean classification accuracy of 84.29% ± 2.11% and 87.37% ± 3.06%, respectively. The present study demonstrates that the proposed BMI is effective to control the upper-limb exoskeleton, and provides a practical method by non-invasive EEG signal associated with human natural behavior for clinical applications.
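
    The classification stage of an ERD/ERS-based BMI is typically band power in the mu and beta bands fed to a linear classifier; the sketch below shows that generic pipeline on synthetic EEG. The sampling rate, band edges, and channel count are assumptions, and because the data are random the accuracy will hover near chance.

```python
# Generic ERD/ERS-style pipeline sketch on synthetic EEG (not the authors' data
# or exact method): mu (8-12 Hz) and beta (18-26 Hz) band power per channel,
# classified with linear discriminant analysis.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs, n_trials, n_ch, n_samp = 250, 120, 8, 1000
rng = np.random.default_rng(3)
eeg = rng.normal(size=(n_trials, n_ch, n_samp))
labels = rng.integers(0, 2, n_trials)            # e.g., left hand vs. both feet

def band_power(trial, lo, hi):
    f, pxx = welch(trial, fs=fs, nperseg=256, axis=-1)
    return pxx[:, (f >= lo) & (f <= hi)].mean(axis=-1)

X = np.array([np.concatenate([band_power(t, 8, 12), band_power(t, 18, 26)]) for t in eeg])
print("CV accuracy:", cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean())
```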

  13. A brain-machine-muscle interface for restoring hindlimb locomotion after complete spinal transection in rats.

    PubMed

    Alam, Monzurul; Chen, Xi; Zhang, Zicong; Li, Yan; He, Jufang

    2014-01-01

    A brain-machine interface (BMI) is a neuroprosthetic device that can restore motor function of individuals with paralysis. Although the feasibility of BMI control of upper-limb neuroprostheses has been demonstrated, a BMI for the restoration of lower-limb motor functions has not yet been developed. The objective of this study was to determine if gait-related information can be captured from neural activity recorded from the primary motor cortex of rats, and if this neural information can be used to stimulate paralysed hindlimb muscles after complete spinal cord transection. Neural activity was recorded from the hindlimb area of the primary motor cortex of six female Sprague Dawley rats during treadmill locomotion before and after mid-thoracic transection. Before spinal transection there was a strong association between neural activity and the step cycle. This association decreased after spinal transection. However, the locomotive state (standing vs. walking) could still be successfully decoded from neural recordings made after spinal transection. A novel BMI device was developed that processed this neural information in real-time and used it to control electrical stimulation of paralysed hindlimb muscles. This system was able to elicit hindlimb muscle contractions that mimicked forelimb stepping. We propose this lower-limb BMI as a future neuroprosthesis for human paraplegics.

  14. Brain-machine interfaces: electrophysiological challenges and limitations.

    PubMed

    Lega, Bradley C; Serruya, Mijail D; Zaghloul, Kareem A

    2011-01-01

    Brain-machine interfaces (BMI) seek to directly communicate with the human nervous system in order to diagnose and treat intrinsic neurological disorders. While the first generation of these devices has realized significant clinical successes, they often rely on gross electrical stimulation using empirically derived parameters through open-loop mechanisms of action that are not yet fully understood. Their limitations reflect the inherent challenge in developing the next generation of these devices. This review identifies lessons learned from the first generation of BMI devices (chiefly deep brain stimulation), identifying key problems for which the solutions will aid the development of the next generation of technologies. Our analysis examines four hypotheses for the mechanism by which brain stimulation alters surrounding neurophysiologic activity. We then focus on motor prosthetics, describing various approaches to overcoming the problems of decoding neural signals. We next turn to visual prosthetics, an area for which the challenges of signal coding to match neural architecture has been partially overcome. Finally, we close with a review of cortical stimulation, examining basic principles that will be incorporated into the design of future devices. Throughout the review, we relate the issues of each specific topic to the common thread of BMI research: translating new knowledge of network neuroscience into improved devices for neuromodulation.

  15. Semantics of User Interface for Image Retrieval: Possibility Theory and Learning Techniques.

    ERIC Educational Resources Information Center

    Crehange, M.; And Others

    1989-01-01

    Discusses the need for a rich semantics for the user interface in interactive image retrieval and presents two methods for building such interfaces: possibility theory applied to fuzzy data retrieval, and a machine learning technique applied to learning the user's deep need. Prototypes developed using videodisks and knowledge-based software are…

  16. Highly sensitive strain sensors based on fragmentized carbon nanotube/polydimethylsiloxane composites.

    PubMed

    Gao, Yang; Fang, Xiaoliang; Tan, Jianping; Lu, Ting; Pan, Likun; Xuan, Fuzhen

    2018-06-08

    Wearable strain sensors based on nanomaterial/elastomer composites have potential applications in flexible electronic skin, human motion detection, human-machine interfaces, etc. In this research, a type of high performance strain sensors has been developed using fragmentized carbon nanotube/polydimethylsiloxane (CNT/PDMS) composites. The CNT/PDMS composites were ground into fragments, and a liquid-induced densification method was used to fabricate the strain sensors. The strain sensors showed high sensitivity with gauge factors (GFs) larger than 200 and a broad strain detection range up to 80%, much higher than those strain sensors based on unfragmentized CNT/PDMS composites (GF < 1). The enhanced sensitivity of the strain sensors is ascribed to the sliding of individual fragmentized-CNT/PDMS-composite particles during mechanical deformation, which causes significant resistance change in the strain sensors. The strain sensors can differentiate mechanical stimuli and monitor various human body motions, such as bending of the fingers, human breathing, and blood pulsing.
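
    The gauge factor quoted above is the standard ratio of relative resistance change to applied strain, GF = (ΔR/R0)/ε. The numbers in the sketch below are hypothetical, chosen only to show the arithmetic behind a GF above 200 at 80% strain.

```python
# Gauge factor of a resistive strain sensor: GF = (delta_R / R0) / strain.
# The resistance values are hypothetical, not measurements from the paper.
def gauge_factor(r0_ohm, r_ohm, strain):
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

# delta_R/R0 = 180 at 80% strain -> GF = 225
print(gauge_factor(r0_ohm=1.0e3, r_ohm=1.81e5, strain=0.80))
```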

  17. Avatars and virtual agents – relationship interfaces for the elderly

    PubMed Central

    2017-01-01

    In the Digital Era, the authors witness a change in the relationship between the patient and the care-giver or Health Maintenance Organization providing the health services. Another fact is the use of various technologies to increase the effectiveness and quality of health services across all primary and secondary users. These technologies range from telemedicine systems, decision-making tools, online and self-service applications to virtual agents, all providing information and assistance. The common thread between all these digital implementations is that they all require human-machine interfaces. These interfaces must be interactive, user-friendly and inviting, to create user involvement and cooperation incentives. The challenge is to design interfaces which will best fit the target users and enable smooth interaction, especially for elderly users. Avatars and Virtual Agents are one of the interfaces used for both home care monitoring and companionship. They are also inherently multimodal in nature and allow an intimate relation between the elderly users and the Avatar. This study discusses the need for and nature of these relationship models and the challenges of designing for the elderly. The study proposes key features for the design and evaluation of assistive applications using Avatars and Virtual Agents for elderly users. PMID:28706725

  18. A small, cheap, and portable reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey

    2005-05-01

    While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
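
    The control architecture is described as a simple finite-state-machine scripting layer; a minimal sketch of such a layer is given below. The states and events are hypothetical stand-ins, not the robot's actual control states.

```python
# Minimal finite-state-machine scripting layer of the kind described. The state
# and event names are hypothetical examples, not taken from the robot's software.
TRANSITIONS = {
    ("idle",          "throw_detected"): "self_right",
    ("self_right",    "upright"):        "teleop",
    ("teleop",        "link_lost"):      "hold_position",
    ("hold_position", "link_restored"):  "teleop",
}

def step(state, event):
    """Return the next state; stay put on unknown (state, event) pairs."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["throw_detected", "upright", "link_lost", "link_restored"]:
    state = step(state, event)
    print(f"{event} -> {state}")
```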

  19. A Formal Characterization of Relevant Information in Multi-Agent Systems

    DTIC Science & Technology

    2009-10-01

    Conference iTrust. (2004) [17] Sadek, D.: Le dialogue homme-machine : de l'ergonomie des interfaces à l'agent intelligent dialoguant. In: Nouvelles interfaces homme-machine, Lavoisier Editeur, Arago 18 (1996) 277–321

  20. Charting the energy landscape of metal/organic interfaces via machine learning

    NASA Astrophysics Data System (ADS)

    Scherbela, Michael; Hörmann, Lukas; Jeindl, Andreas; Obersteiner, Veronika; Hofmann, Oliver T.

    2018-04-01

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. We demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.
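
    The essential workflow (fit a surrogate model to roughly 100 DFT-evaluated candidates, then rank a much larger pool of structures by predicted formation energy) can be sketched generically as below. The kernel ridge regressor, descriptor dimension, and synthetic energies are illustrative assumptions, not the descriptors or model used by the authors.

```python
# Generic surrogate-model sketch: train on ~100 DFT-evaluated candidate
# structures, then rank a large pool of unevaluated polymorphs by predicted
# formation energy. Descriptors and energies are synthetic placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
X_train = rng.normal(size=(100, 12))                                   # descriptors of evaluated structures
y_train = X_train @ rng.normal(size=12) + 0.1 * rng.normal(size=100)   # stand-in formation energies (eV)

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1).fit(X_train, y_train)

X_pool = rng.normal(size=(10000, 12))        # unevaluated candidate structures
ranked = np.argsort(model.predict(X_pool))   # lowest predicted energy (most stable) first
print("most promising candidates:", ranked[:5])
```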

  1. Charting the energy landscape of metal/organic interfaces via machine learning

    DOE PAGES

    Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas; ...

    2018-04-17

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. Here in this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.

  2. Charting the energy landscape of metal/organic interfaces via machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. Here in this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.

  3. Man-machine interfaces in health care

    NASA Technical Reports Server (NTRS)

    Charles, Steve; Williams, Roy E.

    1991-01-01

    The surgeon, like the pilot, is confronted with an ever increasing volume of voice, data, and image input. Simultaneously, the surgeon must control a rapidly growing number of devices to deliver care to the patient. The broad disciplines of man-machine interface design, systems integration, and teleoperation will play a role in the operating room of the future. The purpose of this communication is to report the incorporation of these design concepts into new surgical and laser delivery systems. A review of each general problem area and the systems under development to solve the problems are presented.

  4. Cutting Modeling of Hybrid CFRP/Ti Composite with Induced Damage Analysis

    PubMed Central

    Xu, Jinyang; El Mansori, Mohamed

    2016-01-01

    In hybrid carbon fiber reinforced polymer (CFRP)/Ti machining, the bi-material interface is the weakest region vulnerable to severe damage formation when the tool cutting from one phase to another phase and vice versa. The interface delamination as well as the composite-phase damage is the most serious failure dominating the bi-material machining. In this paper, an original finite element (FE) model was developed to inspect the key mechanisms governing the induced damage formation when cutting this multi-phase material. The hybrid composite model was constructed by establishing three disparate physical constituents, i.e., the Ti phase, the interface, and the CFRP phase. Different constitutive laws and damage criteria were implemented to build up the entire cutting behavior of the bi-material system. The developed orthogonal cutting (OC) model aims to characterize the dynamic mechanisms of interface delamination formation and the affected interface zone (AIZ). Special focus was made on the quantitative analyses of the parametric effects on the interface delamination and composite-phase damage. The numerical results highlighted the pivotal role of AIZ in affecting the formation of interface delamination, and the significant impacts of feed rate and cutting speed on delamination extent and fiber/matrix failure. PMID:28787824

  5. VOTable JAVA Streaming Writer and Applications.

    NASA Astrophysics Data System (ADS)

    Kulkarni, P.; Kembhavi, A.; Kale, S.

    2004-07-01

    Virtual Observatory related tools use a new standard for data transfer called the VOTable format. This is a variant of the XML format that enables easy transfer of data over the web. We describe a streaming interface that can bridge the VOTable format, through a user-friendly graphical interface, with the FITS and ASCII formats, which are commonly used by astronomers. A streaming interface is important for efficient use of memory because of the large size of catalogues. The tools are developed in JAVA to provide a platform independent interface. We have also developed a stand-alone version that can be used to convert data stored in ASCII or FITS format on a local machine. The Streaming writer is successfully being used in VOPlot (see Kale et al. 2004 for a description of VOPlot). We present the test results of converting huge FITS and ASCII data into the VOTable format on machines that have only limited memory.
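
    The tools described here are Java; purely as a point of comparison, the same FITS-to-VOTable conversion can be expressed in a few lines of Python with astropy (the file names below are hypothetical, and this simple form reads the whole table into memory rather than streaming it).

```python
# Minimal FITS/ASCII -> VOTable conversion with astropy, shown only for
# comparison with the Java streaming writer described above. File names are
# hypothetical; unlike the streaming writer, Table.read loads the full table in memory.
from astropy.table import Table

catalogue = Table.read("catalogue.fits")                       # FITS binary table
catalogue.write("catalogue.vot", format="votable", overwrite=True)

# An ASCII catalogue would be handled the same way, e.g.:
# Table.read("catalogue.txt", format="ascii").write("catalogue.vot", format="votable", overwrite=True)
```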

  6. Human-machine interface issues in the use of helmet-mounted displays in short conjugate simulators

    NASA Astrophysics Data System (ADS)

    Melzer, James E.

    2011-06-01

    With the introduction of helmet-mounted displays (HMD) into modern aircraft, there is a desire on the part of pilot trainees to achieve a "look and feel" for the simulation environment similar to the real flight hardware. Given this requirement for high fidelity, it may be necessary to configure, or perhaps re-configure, the HMD for a short conjugate viewing distance and to do so without causing eye strain or other adverse physiological effects. This paper will survey the human factors literature and provide an analysis of the visual construct issues of focus and vergence which, if not properly configured for the short conjugate simulator, could cause adverse effects that negatively affect training.
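
    The vergence side of the focus/vergence issue follows from simple geometry: for interpupillary distance IPD and viewing distance d, the convergence angle is 2*atan(IPD/(2d)). The sketch below uses an assumed typical IPD of 63 mm (not a figure from the paper) to show how the vergence demand of a short conjugate differs from optical infinity.

```python
# Vergence angle implied by a viewing distance d: theta = 2 * atan(IPD / (2 d)).
# IPD of 63 mm is an assumed typical value, not taken from the paper.
import math

def vergence_deg(distance_m, ipd_m=0.063):
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

for d in (0.5, 1.0, 3.0, 100.0):       # 100 m is effectively optical infinity
    print(f"{d:6.1f} m -> {vergence_deg(d):5.2f} deg of convergence")
```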

  7. Feature Selection in Classification of Eye Movements Using Electrooculography for Activity Recognition

    PubMed Central

    Mala, S.; Latha, K.

    2014-01-01

    Activity recognition is needed in a range of applications, for example, reconnaissance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. For selecting a subset of features, Differential Evolution (DE), an efficient evolutionary optimizer, is used to find informative features from eye movements recorded with electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum-redundancy maximum-relevance features, and Differential Evolution-based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for faultless activity recognition. PMID:25574185
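
    A common way to cast DE-based feature selection (sketched here on synthetic data, not the EOG features from the paper) is to let each gene lie in [0, 1], threshold it at 0.5 to obtain a binary feature mask, and score the mask by cross-validated classification accuracy.

```python
# Sketch of Differential Evolution feature selection: continuous genes are
# thresholded to a binary mask and scored by cross-validated accuracy.
# Synthetic data; the classifier and settings are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=15, n_informative=5, random_state=0)

def neg_accuracy(genes):
    mask = genes > 0.5
    if not mask.any():
        return 1.0                       # penalize the empty feature subset
    return -cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

result = differential_evolution(neg_accuracy, bounds=[(0, 1)] * X.shape[1],
                                maxiter=20, popsize=10, seed=0, tol=1e-3)
print("selected features:", np.flatnonzero(result.x > 0.5), "accuracy:", -result.fun)
```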

  8. Feature selection in classification of eye movements using electrooculography for activity recognition.

    PubMed

    Mala, S; Latha, K

    2014-01-01

    Activity recognition is needed in a range of applications, for example, reconnaissance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. For selecting a subset of features, Differential Evolution (DE), an efficient evolutionary optimizer, is used to find informative features from eye movements recorded with electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum-redundancy maximum-relevance features, and Differential Evolution-based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for faultless activity recognition.

  9. Complete scanpaths analysis toolbox.

    PubMed

    Augustyniak, Piotr; Mikrut, Zbigniew

    2006-01-01

    This paper presents a complete open software environment for the control, data processing and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to various visual information-based man-machine interfacing, human-emulated automatic visual systems, and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infra-red reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower one, communicating with the eyetracker output file; the middle one, detecting scanpath events on a physiological background; and the upper one, consisting of experiment schedule scripts, statistics and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.
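
    The middle layer's event detection could, for instance, be a dispersion-threshold fixation detector; the toolbox itself is Matlab, so the Python sketch below is only an illustration of that kind of routine, with hypothetical thresholds and synthetic gaze data.

```python
# Illustrative dispersion-threshold (I-DT) fixation detector of the sort a
# scanpath-analysis layer might use. Thresholds and data are hypothetical;
# the actual toolbox is implemented in Matlab.
import numpy as np

def detect_fixations(x, y, fs_hz, min_dur_s=0.1, max_disp=1.0):
    """Return (start, end) sample indices of fixations in gaze traces x, y (deg)."""
    def dispersion(a, b):
        return (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())

    win, fixations, i = int(min_dur_s * fs_hz), [], 0
    while i + win <= len(x):
        j = i + win
        if dispersion(i, j) <= max_disp:
            while j < len(x) and dispersion(i, j + 1) <= max_disp:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

rng = np.random.default_rng(5)
gaze_x = np.concatenate([rng.normal(0, 0.05, 60), rng.normal(5, 0.05, 60)])
gaze_y = np.concatenate([rng.normal(0, 0.05, 60), rng.normal(3, 0.05, 60)])
print(detect_fixations(gaze_x, gaze_y, fs_hz=250))   # two fixations separated by a saccade
```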

  10. Exploiting co-adaptation for the design of symbiotic neuroprosthetic assistants.

    PubMed

    Sanchez, Justin C; Mahmoudi, Babak; DiGiovanna, Jack; Principe, Jose C

    2009-04-01

    The success of brain-machine interfaces (BMI) is enabled by the remarkable ability of the brain to incorporate the artificial neuroprosthetic 'tool' into its own cognitive space and use it as an extension of the user's body. Unlike other tools, neuroprosthetics create a shared space that seamlessly spans the user's internal goal representation of the world and the external physical environment enabling a much deeper human-tool symbiosis. A key factor in the transformation of 'simple tools' into 'intelligent tools' is the concept of co-adaptation where the tool becomes functionally involved in the extraction and definition of the user's goals. Recent advancements in the neuroscience and engineering of neuroprosthetics are providing a blueprint for how new co-adaptive designs based on reinforcement learning change the nature of a user's ability to accomplish tasks that were not possible using conventional methodologies. By designing adaptive controls and artificial intelligence into the neural interface, tools can become active assistants in goal-directed behavior and further enhance human performance in particular for the disabled population. This paper presents recent advances in computational and neural systems supporting the development of symbiotic neuroprosthetic assistants.

  11. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background: Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results: This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions: AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
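
    The automated method and hyper-parameter selection that AZOrange wraps can be illustrated generically with scikit-learn (this is not AZOrange's own API): cross-validated grid searches over a few candidate learners on synthetic descriptor data, keeping whichever scores best.

```python
# Generic sketch of automated method + hyper-parameter selection of the kind
# AZOrange provides for QSAR modeling. Uses scikit-learn and synthetic
# descriptors; this is not AZOrange's API.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=50, noise=5.0, random_state=0)

candidates = {
    "random_forest": GridSearchCV(RandomForestRegressor(random_state=0),
                                  {"n_estimators": [100, 300]}, cv=3),
    "svr": GridSearchCV(SVR(), {"C": [1, 10], "gamma": ["scale", 0.01]}, cv=3),
}

best_name, best_search = max(candidates.items(), key=lambda kv: kv[1].fit(X, y).best_score_)
print("selected method:", best_name, "CV R^2:", round(best_search.best_score_, 3))
```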

  12. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment.

    PubMed

    Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott

    2011-07-28

    Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.

  13. Manufacturing, assembling and packaging of miniaturized implants for neural prostheses and brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Stieglitz, Thomas

    2009-05-01

    Implantable medical devices to interface with muscles, peripheral nerves, and the brain have been developed for many applications over the last decades. They have been applied in fundamental neuroscientific studies as well as in diagnosis, therapy and rehabilitation in clinical practice. Success stories of these implants have been written with help of precision mechanics manufacturing techniques. Latest cutting edge research approaches to restore vision in blind persons and to develop an interface with the human brain as motor control interface, however, need more complex systems and larger scales of integration and higher degrees of miniaturization. Microsystems engineering offers adequate tools, methods, and materials but so far, no MEMS based active medical device has been transferred into clinical practice. Silicone rubber, polyimide, parylene as flexible materials and silicon and alumina (aluminum dioxide ceramics) as substrates and insulation or packaging materials, respectively, and precious metals as electrodes have to be combined to systems that do not harm the biological target structure and have to work reliably in a wet environment with ions and proteins. Here, different design, manufacturing and packaging paradigms will be presented and strengths and drawbacks will be discussed in close relation to the envisioned biological and medical applications.

  14. A novel asynchronous access method with binary interfaces

    PubMed Central

    2008-01-01

    Background: Traditionally, synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods: We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands to be controlled. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. Then, a recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results: When compared to a random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion: This novel access method offers a variety of advantages over traditional synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
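
    A drastically simplified picture of the idea (not the paper's recursive algorithm) is to keep a belief over the N possible commands, act on the most likely one, and treat a switch activation as evidence that the executed command was wrong. The command count, likelihood factor, and user model below are assumptions made for the sketch.

```python
# Simplified asynchronous intent estimation with a binary "that was wrong"
# switch. Not the paper's algorithm; numbers and the user model are hypothetical.
import numpy as np

n_commands = 5
belief = np.full(n_commands, 1.0 / n_commands)   # uniform prior over commands
true_intent = 3                                  # hypothetical target command

for step in range(10):
    guess = int(np.argmax(belief))
    if guess == true_intent:                     # model user: no activation when correct
        break
    belief[guess] *= 0.05                        # switch pressed: strong evidence against guess
    belief /= belief.sum()
    print(f"step {step}: guessed {guess}, belief = {np.round(belief, 2)}")

print("converged on command", int(np.argmax(belief)))
```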

  15. Cockpit automation - In need of a philosophy

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1985-01-01

    Concern has been expressed over the rapid development and deployment of automatic devices in transport aircraft, due mainly to the human interface and particularly the role of automation in inducing human error. The paper discusses the need for coherent philosophies of automation, and proposes several approaches: (1) flight management by exception, which states that as long as a crew stays within the bounds of regulations, air traffic control and flight safety, it may fly as it sees fit; (2) exceptions by forecasting, where the use of forecasting models would predict boundary penetration, rather than waiting for it to happen; (3) goal-sharing, where a computer is informed of overall goals, and subsequently has the capability of checking inputs and aircraft position for consistency with the overall goal or intentions; and (4) artificial intelligence and expert systems, where intelligent machines could mimic human reason.

  16. Symposium on Aviation Psychology, 1st, Ohio State University, Columbus, OH, April 21, 22, 1981, Proceedings

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The impact of modern technology on the role, responsibility, authority, and performance of human operators in modern aircraft and ATC systems was examined in terms of principles defined by Paul Fitts. Research into human factors in aircraft operations and the use of human factors engineering for aircraft safety improvements were discussed, and features of the man-machine interface in computerized cockpit warning systems are examined. The design and operational features of computerized avionics displays and HUDs are described, along with results of investigations into pilot decision-making behavior, aircrew procedural compliance, and aircrew judgment training programs. Experiments in vision and visual perception are detailed, as are behavioral studies of crew workload, coordination, and complement. The effectiveness of pilot selection, screening, and training techniques are assessed, as are methods for evaluating pilot performance.

  17. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted more and more attention of researchers from different disciplines, which will significantly contribute to a new paradigm for human computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance the research in the affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables human to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  18. Virtual reality systems

    NASA Technical Reports Server (NTRS)

    Johnson, David W.

    1992-01-01

    Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.

  19. Thermocouple and infrared sensor-based measurement of temperature distribution in metal cutting.

    PubMed

    Kus, Abdil; Isik, Yahya; Cakir, M Cemal; Coşkun, Salih; Özdemir, Kadir

    2015-01-12

    In metal cutting, the magnitude of the temperature at the tool-chip interface is a function of the cutting parameters. This temperature directly affects production; therefore, increased research on the role of cutting temperatures can lead to improved machining operations. In this study, tool temperature was estimated by simultaneous temperature measurement employing both a K-type thermocouple and an infrared radiation (IR) pyrometer to measure the tool-chip interface temperature. Due to the complexity of the machining processes, the integration of different measuring techniques was necessary in order to obtain consistent temperature data. The thermal analysis results were compared via the ANSYS finite element method. Experiments were carried out in dry machining using workpiece material of AISI 4140 alloy steel that was heat treated by an induction process to a hardness of 50 HRC. A PVD TiAlN-TiN-coated WNVG 080404-IC907 carbide insert was used during the turning process. The results showed that with increasing cutting speed, feed rate and depth of cut, the tool temperature increased; the cutting speed was found to be the most effective parameter in assessing the temperature rise. The heat distribution of the cutting tool, tool-chip interface and workpiece provided effective and useful data for the optimization of selected cutting parameters during orthogonal machining.
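
    The claim that cutting speed dominates the temperature rise is the kind of conclusion that can be read off standardized regression coefficients; the sketch below does that on synthetic data (the parameter ranges and temperature model are made up, not the study's measurements).

```python
# Illustrative ranking of cutting-parameter influence on tool-chip temperature
# via standardized linear-regression coefficients. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 60
speed = rng.uniform(100, 300, n)   # cutting speed, m/min
feed = rng.uniform(0.1, 0.3, n)    # feed rate, mm/rev
depth = rng.uniform(0.5, 2.0, n)   # depth of cut, mm
temp = 2.0 * speed + 400 * feed + 40 * depth + rng.normal(0, 20, n)   # deg C, made-up model

X = StandardScaler().fit_transform(np.column_stack([speed, feed, depth]))
for name, c in zip(["cutting speed", "feed rate", "depth of cut"],
                   LinearRegression().fit(X, temp).coef_):
    print(f"{name:14s} standardized effect ~ {c:6.1f} deg C per std dev")
```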

  20. Thermocouple and Infrared Sensor-Based Measurement of Temperature Distribution in Metal Cutting

    PubMed Central

    Kus, Abdil; Isik, Yahya; Cakir, M. Cemal; Coşkun, Salih; Özdemir, Kadir

    2015-01-01

    In metal cutting, the magnitude of the temperature at the tool-chip interface is a function of the cutting parameters. This temperature directly affects production; therefore, increased research on the role of cutting temperatures can lead to improved machining operations. In this study, tool temperature was estimated by simultaneous temperature measurement employing both a K-type thermocouple and an infrared radiation (IR) pyrometer to measure the tool-chip interface temperature. Due to the complexity of the machining processes, the integration of different measuring techniques was necessary in order to obtain consistent temperature data. The thermal analysis results were compared via the ANSYS finite element method. Experiments were carried out in dry machining using workpiece material of AISI 4140 alloy steel that was heat treated by an induction process to a hardness of 50 HRC. A PVD TiAlN-TiN-coated WNVG 080404-IC907 carbide insert was used during the turning process. The results showed that with increasing cutting speed, feed rate and depth of cut, the tool temperature increased; the cutting speed was found to be the most effective parameter in assessing the temperature rise. The heat distribution of the cutting tool, tool-chip interface and workpiece provided effective and useful data for the optimization of selected cutting parameters during orthogonal machining. PMID:25587976

  1. Analysis in Motion Initiative – Human Machine Intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaha, Leslie

    As computers and machines become more pervasive in our everyday lives, we are looking for ways for humans and machines to work more intelligently together. How can we help machines understand their users so the team can do smarter things together? The Analysis in Motion Initiative is advancing the science of human machine intelligence — creating human-machine teams that work better together to make correct, useful, and timely interpretations of data.

  2. Multiprocessor Z-Buffer Architecture for High-Speed, High Complexity Computer Image Generation.

    DTIC Science & Technology

    1983-12-01

    Only a fragment of the report's list of figures is available in this record; it covers oversampling, "poking through" effects, sampling paths, triangle variables, the intelligent tiling algorithm, tiler functional blocks, the HSD interface, the tiling machine setup and tile accumulate stages, 1x8 and 2x8 sorting machines, the effect of triangle size on tiler throughput rates, and tiling machine setup stage performance in oversample mode.

  3. Intelligible machine learning with malibu.

    PubMed

    Langlois, Robert E; Lu, Hui

    2008-01-01

    malibu is an open-source machine learning workbench developed in C/C++ for high-performance real-world applications, namely bioinformatics and medical informatics. It leverages third-party machine learning implementations for more robust, bug-free software. This workbench handles several well-studied supervised machine learning problems including classification, regression, importance-weighted classification and multiple-instance learning. The malibu interface was designed to create reproducible experiments ideally run in a remote and/or command line environment. The software can be found at: http://proteomics.bioengr.uic.edu/malibu/index.html.

  4. Method and system for providing work machine multi-functional user interface

    DOEpatents

    Hoff, Brian D [Peoria, IL; Akasam, Sivaprasad [Peoria, IL; Baker, Thomas M [Peoria, IL

    2007-07-10

    A method is performed to provide a multi-functional user interface on a work machine for displaying suggested corrective action. The process includes receiving status information associated with the work machine and analyzing the status information to determine an abnormal condition. The process also includes displaying a warning message on the display device indicating the abnormal condition and determining one or more corrective actions to handle the abnormal condition. Further, the process includes determining an appropriate corrective action among the one or more corrective actions and displaying a recommendation message on the display device reflecting the appropriate corrective action. The process may also include displaying a list including the remaining one or more corrective actions on the display device to provide alternative actions to an operator.
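
    The claimed process is a simple pipeline: read machine status, detect an abnormal condition, display a warning, then recommend one corrective action and list the alternatives. A hedged sketch of that flow is below; the conditions, thresholds, and action lists are hypothetical, not taken from the patent.

```python
# Sketch of the described flow: status in -> abnormal condition -> warning plus a
# recommended corrective action and alternatives. Conditions, thresholds, and
# actions are hypothetical examples, not the patent's.
def advise(status):
    if status.get("coolant_temp_c", 0) > 105:
        condition = "engine overheating"
        actions = ["reduce engine load", "idle to cool", "stop and check coolant level"]
    elif status.get("hydraulic_pressure_kpa", float("inf")) < 12000:
        condition = "low hydraulic pressure"
        actions = ["check hydraulic fluid level", "reduce implement load"]
    else:
        return None                                   # no abnormal condition detected
    return {"warning": condition,
            "recommended": actions[0],                # appropriate corrective action
            "alternatives": actions[1:]}              # remaining corrective actions

print(advise({"coolant_temp_c": 112, "hydraulic_pressure_kpa": 15000}))
```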

  5. The SmartHand transradial prosthesis

    PubMed Central

    2011-01-01

    Background: Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past; nonetheless, most of them would be inappropriate if/when a large-bandwidth human-machine interface for control and perception became available, due to either their limited (or nonexistent) sensorization or limited dexterity. SmartHand tackles this issue, as it is meant to be clinically tested in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on-bench evaluation of the SmartHand. Methods: The SmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components in the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces. Results: SmartHand is a five-fingered, self-contained robotic hand with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller, both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count, and independently point the index finger. The weight (530 g) and speed (closing time: 1.5 seconds) are comparable to current commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that, within particular friction and geometric conditions, the hand is able to stably grasp cylindrical objects of up to 3.6 kg. Conclusions: Due to its unique embedded features and human size, the SmartHand holds the promise to be experimentally fitted on transradial amputees and employed as a bi-directional instrument for investigating, during realistic experiments, different interfaces, control and feedback strategies in neuro-engineering studies. PMID:21600048

  6. Semantic Annotations and Querying of Web Data Sources

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; May, Wolfgang

    A large part of the Web, actually holding a significant portion of the useful information throughout the Web, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web"), and partly based on Web Services (only machine accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.

  7. Big data, little security: Addressing security issues in your platform

    NASA Astrophysics Data System (ADS)

    Macklin, Thomas; Mathews, Joseph

    2017-05-01

    This paper describes some patterns for information security problems that consistently emerge among traditional enterprise networks and applications, both with respect to cyber threats and data sensitivity. We draw upon cases from qualitative studies and interviews of system developers, network operators, and certifiers of military applications. Specifically, the problems discussed involve sensitivity of data aggregates, training efficacy, and security decision support in the human machine interface. While proven techniques can address many enterprise security challenges, we provide additional recommendations on how to further improve overall security posture, and suggest additional research thrusts to address areas where known gaps remain.

  8. Acquisition and production of skilled behavior in dynamic decision-making tasks

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1992-01-01

    Currently, two main approaches exist for improving the human-machine interface component of a system in order to improve overall system performance - display enhancement and intelligent decision making. Discussed here are the characteristic issues of these two decision-making strategies. Differences in expert and novice decision making are described in order to help determine whether a particular strategy may be better for a particular type of user. Research is outlined to compare and contrast the two technologies, as well as to examine the interaction effects introduced by the different skill levels and the different methods for training operators.

  9. Hacker tracking Security system for HMI

    NASA Astrophysics Data System (ADS)

    Chauhan, Rajeev Kumar

    2011-12-01

    Conventional Supervisory Control and Data Acquisition (SCADA) systems use PCs, notebooks, thin clients, and PDAs as clients. Nowadays, process industries follow a multi-shift system, so multiple clients of different categories have to work at a single Human Machine Interface (HMI). They may hack the HMI display and change the settings of other clients. This paper introduces a Hacker Tracking Security (HTS) system for HMI, developed using both conventional and biometric authentication: numeric passwords, smart cards, biometrics, blood flow, and finger temperature. The system is also able to identify hackers.

  10. Comparison of Human and Machine Scoring of Essays: Differences by Gender, Ethnicity, and Country

    ERIC Educational Resources Information Center

    Bridgeman, Brent; Trapani, Catherine; Attali, Yigal

    2012-01-01

    Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
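
    The comparability argument rests on checking that the machine-human score correlation is about as high as the human-human correlation; a tiny sketch of that check on synthetic scores (not the study's data) is shown below.

```python
# Compare machine-human and human-human score correlations on synthetic essays.
# All scores are simulated; this only illustrates the comparison itself.
import numpy as np

rng = np.random.default_rng(7)
quality = rng.normal(size=500)                    # latent essay quality
human1 = quality + rng.normal(0, 0.5, 500)
human2 = quality + rng.normal(0, 0.5, 500)
machine = quality + rng.normal(0, 0.5, 500)

r_hh = np.corrcoef(human1, human2)[0, 1]
r_hm = np.corrcoef(human1, machine)[0, 1]
print(f"human-human r = {r_hh:.2f}, human-machine r = {r_hm:.2f}")
```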

  11. ODC-Free Solvent Implementation for Phenolics Cleaning

    NASA Technical Reports Server (NTRS)

    Wurth, Laura; Biegert, Lydia; Lamont, DT; McCool, Alex (Technical Monitor)

    2001-01-01

    During phenolic liner manufacture, resin-impregnated (prepreg) bias tape of silica, glass, or carbon cloth is tape-wrapped, cured, machined, and then wiped with 1,1,1-trichloroethane (TCA) to remove contaminants that may have been introduced during machining and handling. Following the TCA wipe, the machined surface is given a resin wet-coat and over-wrapped with more prepreg and cured. A TCA replacement solvent for these wiping operations must effectively remove both surface contaminants and sub-surface oils and greases while not compromising the integrity of this interface. Selection of a TCA replacement solvent for phenolic over-wrap interface cleaning began with sub-scale compatibility tests with cured phenolics. Additional compatibility tests included assessment of solvent retention in machined phenolic surfaces. Results from these tests showed that, while the candidate solvent did not degrade the cured phenolics, it was retained in higher concentrations than TCA in phenolic surfaces. This effect was most pronounced with glass and silica cloth phenolics with steep ply angles relative to the wiped surfaces.

  12. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules; (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube a data management method which distributes, redistributes, and tracks data set information was implemented.

  13. A Framework to Guide the Assessment of Human-Machine Systems.

    PubMed

    Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo

    2017-03-01

    We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.

  14. Personal mobility and manipulation using robotics, artificial intelligence and advanced control.

    PubMed

    Cooper, Rory A; Ding, Dan; Grindle, Garrett G; Wang, Hongwu

    2007-01-01

    Recent advances in computation, robotics, machine learning, communication, and miniaturization technologies bring us closer to futuristic visions of compassionate intelligent devices. The missing element is a basic understanding of how to relate human functions (physiological, physical, and cognitive) to the design of intelligent devices and systems that aid and interact with people. Our stakeholder and clinician consultants identified a number of mobility barriers that have been resistant to traditional approaches. The most important physical obstacles are stairs, steps, curbs, doorways (doors), rough/uneven surfaces, weather hazards (snow, ice), crowded/cluttered spaces, and confined spaces. Focus group participants suggested a number of ways to make interaction simpler, including natural-language interfaces such as the ability to say "I want a drink", a library of high-level commands (open a door, park the wheelchair, ...), and a touchscreen interface with images so the user could point and use other gestures.
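
    As one way to picture the suggested library of high-level commands, the following Python sketch registers command phrases and dispatches an utterance to a handler; the command names, handlers, and dispatch mechanism are illustrative assumptions, not part of the study.

        # Hypothetical high-level command library of the kind the focus groups
        # describe; command names and handlers are illustrative only.
        COMMANDS = {}

        def command(name):
            def register(handler):
                COMMANDS[name] = handler
                return handler
            return register

        @command("open a door")
        def open_door():
            return "door opening sequence started"

        @command("park the wheelchair")
        def park_wheelchair():
            return "parking maneuver started"

        def dispatch(utterance):
            # A real system would use a natural-language front end; here we only
            # match the utterance against the registered command phrases.
            handler = COMMANDS.get(utterance.lower().strip())
            return handler() if handler else "command not recognized"

        print(dispatch("Open a door"))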

  15. Bringing UAVs to the fight: recent army autonomy research and a vision for the future

    NASA Astrophysics Data System (ADS)

    Moorthy, Jay; Higgins, Raymond; Arthur, Keith

    2008-04-01

    The Unmanned Autonomous Collaborative Operations (UACO) program was initiated in recognition of the high operational burden associated with utilizing unmanned systems by both mounted and dismounted, ground and airborne warfighters. The program was previously introduced at the 62nd Annual Forum of the American Helicopter Society in May of 2006. This paper presents the three technical approaches taken and the results obtained in UACO. All three approaches were validated extensively in contractor simulations, two were validated in government simulation, one was flight tested outside the UACO program, and one was flight tested in Part 2 of UACO. Results and recommendations are discussed in diverse areas such as user training and human-machine interface, workload distribution, UAV flight safety, data link bandwidth, user interface constructs, adaptive algorithms, air vehicle system integration, and target recognition. Finally, a vision for the UAV as a Wingman is presented.

  16. ATM Technology Demonstration-1 Phase II Boeing Configurable Graphical Display (CGD) Software Design Description

    NASA Technical Reports Server (NTRS)

    Wilber, George F.

    2017-01-01

    This Software Description Document (SDD) captures the design for developing the Flight Interval Management (FIM) system Configurable Graphics Display (CGD) software. Specifically, this SDD describes aspects of the Boeing CGD software and its surrounding context and interfaces; it does not describe the Honeywell components of the CGD system. The SDD provides the system overview, architectural design, and detailed design with all the information necessary to implement the Boeing components of the CGD software and integrate them into the CGD subsystem within the larger FIM system. Overall system and CGD system-level requirements are derived from the CGD SRS (in turn derived from the Boeing System Requirements Design Document (SRDD)). Display and look-and-feel requirements are derived from Human Machine Interface (HMI) design documents and working group recommendations. This Boeing CGD SDD is required to support the upcoming Critical Design Review (CDR).

  17. A strain-absorbing design for tissue-machine interfaces using a tunable adhesive gel.

    PubMed

    Lee, Sungwon; Inoue, Yusuke; Kim, Dongmin; Reuveny, Amir; Kuribara, Kazunori; Yokota, Tomoyuki; Reeder, Jonathan; Sekino, Masaki; Sekitani, Tsuyoshi; Abe, Yusuke; Someya, Takao

    2014-12-19

    To measure electrophysiological signals from the human body, it is essential to establish stable, gentle and nonallergic contacts between the targeted biological tissue and the electrical probes. However, it is difficult to form a stable interface between the two for long periods, especially when the surface of the biological tissue is wet and/or the tissue exhibits motion. Here we resolve this difficulty by designing and fabricating smart, stress-absorbing electronic devices that can adhere to wet and complex tissue surfaces and allow for reliable, long-term measurements of vital signals. We demonstrate a multielectrode array, which can be attached to the surface of a rat heart, resulting in good conformal contact for more than 3 h. Furthermore, we demonstrate arrays of highly sensitive, stretchable strain sensors using a similar design. Ultra-flexible electronics with enhanced adhesion to tissue could enable future applications in chronic in vivo monitoring of biological signals.

  18. An online hybrid brain-computer interface combining multiple physiological signals for webpage browse.

    PubMed

    Long Chen; Zhongpeng Wang; Feng He; Jiajia Yang; Hongzhi Qi; Peng Zhou; Baikun Wan; Dong Ming

    2015-08-01

    A hybrid brain-computer interface (hBCI) can provide a higher information transfer rate than classical BCIs because it combines more than one brain-computer or human-machine interaction paradigm, such as the combination of the P300 and SSVEP paradigms. We first constructed independent subsystems for three different paradigms and tested each of them in online experiments. We then constructed a serial hybrid BCI system that combines these paradigms to provide typing of letters, moving and clicking a cursor, and switching among these functions for the purpose of browsing webpages. Five subjects were involved in this study, and all of them successfully realized these functions in the online tests. After training, the subjects achieved accuracies above 90%, which met the requirement for operating the system efficiently. The results demonstrate an efficient and robust system that provides an approach toward clinical application.
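
    A serial hybrid BCI of this kind can be thought of as a small state machine that switches among paradigms; the Python sketch below illustrates the idea with hypothetical mode names and a simplified switching rule, and does not reproduce the authors' system.

        # Sketch of a serial hybrid-BCI controller that switches among paradigms;
        # mode names and the switching rule are assumptions for illustration.
        from enum import Enum, auto

        class Mode(Enum):
            TYPE_LETTERS = auto()   # e.g. a P300 speller subsystem
            MOVE_CURSOR = auto()    # e.g. an SSVEP / motor-imagery subsystem
            CLICK_CURSOR = auto()

        class HybridBCI:
            def __init__(self):
                self.mode = Mode.TYPE_LETTERS

            def on_switch_command(self, target):
                # In an online system a dedicated decoded response would trigger
                # the switch; here the target mode is passed in directly.
                self.mode = target

            def on_decoded_output(self, value):
                if self.mode is Mode.TYPE_LETTERS:
                    return f"typed '{value}'"
                if self.mode is Mode.MOVE_CURSOR:
                    return f"cursor moved {value}"
                return "cursor clicked"

        bci = HybridBCI()
        print(bci.on_decoded_output("A"))
        bci.on_switch_command(Mode.MOVE_CURSOR)
        print(bci.on_decoded_output((5, -2)))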

  19. Soft drink effects on sensorimotor rhythm brain computer interface performance and resting-state spectral power.

    PubMed

    Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He

    2016-08-01

    Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting-state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting-state EEG recording. During each session, the subjects drank an unlabeled soft drink containing either sugar, caffeine, or neither ingredient. A comparison of resting-state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite attenuation of the frequency range used for the control signal, average BCI performance after caffeine was the same as control. Our work provides a useful characterization of the effect of caffeine, the world's most popular stimulant, on brain signal frequencies and on BCI performance.
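
    The resting-state band-power comparison can be illustrated with a short Python sketch using Welch's method on a synthetic signal; the sampling rate, band limits, and analysis parameters are common conventions assumed for the example, not the study's actual settings.

        # Band-power estimation with Welch's method on a synthetic "EEG" signal.
        # All parameters here are illustrative assumptions.
        import numpy as np
        from scipy.signal import welch

        fs = 250.0                                  # assumed sampling rate in Hz
        t = np.arange(0, 60, 1 / fs)                # one minute of resting-state data
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz "alpha" + noise

        freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))

        def band_power(freqs, psd, lo, hi):
            # Approximate the integral of the PSD over the band.
            mask = (freqs >= lo) & (freqs <= hi)
            df = freqs[1] - freqs[0]
            return psd[mask].sum() * df

        alpha = band_power(freqs, psd, 8, 13)       # alpha band (8-13 Hz)
        beta = band_power(freqs, psd, 13, 30)       # beta band (13-30 Hz)
        print(f"alpha power: {alpha:.3f}, beta power: {beta:.3f}")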

  20. Electric Motors Maintenance Planning From Its Operating Variables

    NASA Astrophysics Data System (ADS)

    Rodrigues, Francisco; Fonseca, Inácio; Farinha, José Torres; Ferreira, Luís; Galar, Diego

    2017-09-01

    Maintenance planning seeks to maximize the availability of equipment and, consequently, to increase the competitiveness of companies by increasing production time. This paper presents a maintenance planning approach based on operating variables (number of hours worked, duty cycles, number of revolutions) to maximize the operational availability of electric motors. The operating variables are read and sampled at predetermined sampling cycles, and the data are then analyzed with time-series algorithms so that work orders can be launched before the variables reach their limit values. This approach is supported by software applications that provide a graphical Human Machine Interface (HMI) for access to relevant information about the physical asset, including control and supervision through data acquired via SCADA (Supervisory Control and Data Acquisition), as well as the communication protocols among the different applications.
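
    The idea of launching a work order before an operating variable reaches its limit can be sketched with a simple trend extrapolation; the Python below uses an assumed linear model, example data, and thresholds that are illustrative only, not the paper's algorithm or values.

        # Anticipating a limit crossing from a sampled operating variable.
        # Data, limit, and lead time are illustrative assumptions.
        import numpy as np

        hours_run = np.array([0, 100, 200, 300, 400, 500])             # sampling cycles (h)
        bearing_temp = np.array([52.0, 53.1, 54.3, 55.2, 56.4, 57.5])  # measured variable (degC)
        LIMIT = 70.0        # assumed limit value for the variable
        LEAD_TIME = 200     # raise the work order this many hours before the limit

        slope, intercept = np.polyfit(hours_run, bearing_temp, 1)      # simple linear trend
        hours_at_limit = (LIMIT - intercept) / slope                   # predicted crossing time

        if hours_at_limit - hours_run[-1] <= LEAD_TIME:
            print(f"launch work order: limit expected near {hours_at_limit:.0f} h")
        else:
            print(f"no action: {hours_at_limit - hours_run[-1]:.0f} h of margin remain")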
