Sample records for human-machine interface evaluation

  1. Human Machine Interfaces for Teleoperators and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)

    1991-01-01

    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This collection of conference-related fragments gives a glimpse of the potential of the following fields and how they interact: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

  2. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabin, the operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout scheme. Through joint angles to describe operating posture of upper limb, the joint angles are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is operating comfort score. The Chinese virtual human body model is built by CATIA software, which will be used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training sample and validation sample, GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result of human-machine interface layout of driller control room shows that operating comfort prediction model based on GEP is fast and efficient, it has good prediction effect, and it can improve the design efficiency.
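
    The pipeline sketched in this abstract (16 joint angles reduced to 4 comfort factors, then an evolved fitting function) can be illustrated in a few lines. The sketch below is hypothetical: it uses random placeholder data rather than the paper's 22 evaluation groups, scikit-learn's FactorAnalysis for the dimension reduction, and gplearn's genetic-programming symbolic regressor as a stand-in for GEP.

```python
# Hypothetical sketch: factor analysis + evolved fitting function for comfort scores.
# Shapes follow the abstract (16 joint angles -> 4 factors -> 1 comfort score); the
# training values are random placeholders, not the paper's data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from gplearn.genetic import SymbolicRegressor  # genetic-programming stand-in for GEP

rng = np.random.default_rng(0)
joint_angles = rng.uniform(0, 120, size=(22, 16))  # 22 evaluation groups, 16 joint angles (deg)
comfort = rng.uniform(0, 10, size=22)              # expert comfort ratings

# Step 1: reduce the 16 joint angles to 4 comfort impact factors.
fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(joint_angles)

# Step 2: evolve a fitting function from factors to comfort (GEP in the paper).
model = SymbolicRegressor(population_size=500, generations=20, random_state=0)
model.fit(factors, comfort)

# Step 3: predict comfort for a new layout's simulated posture.
new_posture = rng.uniform(0, 120, size=(1, 16))
print("predicted comfort:", model.predict(fa.transform(new_posture))[0])
```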

  3. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabin, the operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout scheme. Through joint angles to describe operating posture of upper limb, the joint angles are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is operating comfort score. The Chinese virtual human body model is built by CATIA software, which will be used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training sample and validation sample, GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result of human-machine interface layout of driller control room shows that operating comfort prediction model based on GEP is fast and efficient, it has good prediction effect, and it can improve the design efficiency. PMID:26448740

  4. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA.

    PubMed

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to consider the psychological cognitive characteristics affecting operating comfort and realize the automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interface. First, from the perspective of cognitive psychology, according to the information processing process, the cognitive model of human-machine interaction interface was established. Then, the human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interface were summarized as the constraints in layout design. Again, the expression form of fitness function, pheromone, and heuristic information for the layout optimization of cabin was studied. The layout design model of human-machine interaction interface was established based on GA-ACA. At last, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method.
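
    The abstract above names the three ingredients of the ant-colony part of GA-ACA: a fitness (cost) function, a pheromone matrix, and heuristic information. The toy sketch below assigns five controls to five panel slots using those ingredients; the control weights, the cost term, and the omission of the genetic-algorithm stage are all simplifications for illustration, not the paper's formulation.

```python
# Toy ant-colony placement of controls on a panel, in the spirit of the GA-ACA
# layout optimization described above. All values and the cost function are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 5                                               # 5 controls into 5 panel slots
use_freq = np.array([0.9, 0.7, 0.5, 0.3, 0.1])      # hypothetical frequency-of-use weights
slot_dist = np.array([1.0, 1.5, 2.0, 2.5, 3.0])     # slot distance from the operator's hand

def cost(perm):
    # Layout principle as a cost: frequently used controls should sit in near slots.
    return float(np.sum(use_freq[list(perm)] * slot_dist))

pheromone = np.ones((n, n))                         # pheromone[slot, control]
heuristic = 1.0 / np.outer(slot_dist, 1.0 - use_freq + 0.1)  # favor near-slot/frequent pairs
best_perm, best_cost = None, np.inf

for _ in range(100):                                # colony iterations
    for _ant in range(10):
        free, perm = list(range(n)), []
        for slot in range(n):
            w = pheromone[slot, free] * heuristic[slot, free]
            choice = rng.choice(free, p=w / w.sum())
            perm.append(choice)
            free.remove(choice)
        c = cost(perm)
        if c < best_cost:
            best_perm, best_cost = perm, c
        for slot, ctrl in enumerate(perm):
            pheromone[slot, ctrl] += 1.0 / c        # pheromone deposit
    pheromone *= 0.9                                # evaporation

print("best placement (slot -> control index):", best_perm, "cost:", round(best_cost, 3))
```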

  5. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA

    PubMed Central

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to consider the psychological cognitive characteristics affecting operating comfort and realize the automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interface. First, from the perspective of cognitive psychology, according to the information processing process, the cognitive model of human-machine interaction interface was established. Then, the human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interface were summarized as the constraints in layout design. Again, the expression form of fitness function, pheromone, and heuristic information for the layout optimization of cabin was studied. The layout design model of human-machine interaction interface was established based on GA-ACA. At last, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method. PMID:26884745

  6. Proceedings of the 1986 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-01-01

    This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.

  7. Formal verification of human-automation interaction

    NASA Technical Reports Server (NTRS)

    Degani, Asaf; Heymann, Michael

    2002-01-01

    This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training material (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces.
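
    One way to picture the kind of check this methodology performs is to ask whether two machine states that look identical on the interface can react differently to the same operator action. The sketch below does exactly that for a tiny, invented autopilot-like state machine; it is a minimal illustration of the idea, not the authors' formal verification procedure.

```python
# Minimal sketch: does the interface abstraction give the operator enough information
# to predict the machine's next mode? The state machine and display mapping are invented.
from itertools import combinations

# machine model: (state, event) -> next state
machine = {
    ("HOLD", "dial_speed"): "HOLD",
    ("CAPTURE", "dial_speed"): "OVERSPEED_PROTECT",   # hidden mode change
    ("OVERSPEED_PROTECT", "dial_speed"): "OVERSPEED_PROTECT",
}
# interface abstraction: machine state -> what the display shows
display = {"HOLD": "SPD", "CAPTURE": "SPD", "OVERSPEED_PROTECT": "SPD PROT"}

def ambiguous_pairs():
    """States the pilot cannot tell apart but that react differently to the same input."""
    for s1, s2 in combinations(display, 2):
        if display[s1] != display[s2]:
            continue                                  # operator can tell them apart
        for (state, event), nxt in machine.items():
            if state != s1 or (s2, event) not in machine:
                continue
            if display[nxt] != display[machine[(s2, event)]]:
                yield s1, s2, event                   # same display, same action, different outcome

for s1, s2, ev in ambiguous_pairs():
    print(f"ambiguity: '{ev}' from displayed mode '{display[s1]}' "
          f"may lead to '{display[machine[(s1, ev)]]}' or '{display[machine[(s2, ev)]]}'")
```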

  8. Future Cyborgs: Human-Machine Interface for Virtual Reality Applications

    DTIC Science & Technology

    2007-04-01

    Future Cyborgs: Human-Machine Interface for Virtual Reality Applications. Robert R. Powell, Major, USAF, April 2007, Blue Horizons program. The indexed excerpt contains only title-page text, report-documentation form fields, and endnote fragments (citing Nicholas Negroponte, Being Digital, and Andy Clark, Natural-Born Cyborgs); no abstract is available.

  9. DESIGN AND EVALUATION OF INDIVIDUAL ELEMENTS OF THE INTERFACE FOR AN AGRICULTURAL MACHINE.

    PubMed

    Rakhra, Aadesh K; Mann, Danny D

    2018-01-29

    If a user-centered approach is not used to design information displays, the quantity and quality of information presented to the user may not match the needs of the user, or it may exceed the capability of the human operator for processing and using that information. The result may be an excessive mental workload and reduced situation awareness of the operator, which can negatively affect the machine performance and operational outcomes. The increasing use of technology in agricultural machines may expose the human operator to excessive and undesirable information if the operator's information needs and information processing capabilities are ignored. In this study, a user-centered approach was used to design specific interface elements for an agricultural air seeder. Designs of the interface elements were evaluated in a laboratory environment by developing high-fidelity prototypes. Evaluations of the user interface elements yielded significant improvement in situation awareness (up to 11%; overall mean difference = 5.0 (4.8%), 95% CI (6.4728, 3.5939), p < 0.0001). Mental workload was reduced by up to 19.7% (overall mean difference = -5.2 (-7.9%), n = 30, α = 0.05). Study participants rated the overall performance of the newly designed user-centered interface elements higher in comparison to the previous designs (overall mean difference = 27.3 (189.8%), 99% CI (35.150, 19.384), p < 0.0001). Copyright© by the American Society of Agricultural Engineers.
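
    The statistics quoted above (mean difference, confidence interval, p-value) come from paired comparisons of old versus new interface elements. The sketch below reproduces that style of analysis on random placeholder scores, not the study's data, using SciPy's paired t-test and t-interval.

```python
# Sketch of the paired comparison behind figures like "mean difference = 5.0,
# 95% CI, p < 0.0001". The situation-awareness scores are random placeholders,
# not the study's n = 30 participant data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
old_design = rng.normal(70, 6, size=30)                 # SA score, previous interface elements
new_design = old_design + rng.normal(5, 3, size=30)     # SA score, user-centered elements

diff = new_design - old_design
t, p = stats.ttest_rel(new_design, old_design)
ci = stats.t.interval(0.95, df=len(diff) - 1,
                      loc=diff.mean(), scale=stats.sem(diff))

print(f"mean difference = {diff.mean():.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.2g}")
```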

  10. Man-systems integration and the man-machine interface

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1990-01-01

    Viewgraphs on man-systems integration and the man-machine interface are presented. Man-systems integration applies the systems' approach to the integration of the user and the machine to form an effective, symbiotic Man-Machine System (MMS). A MMS is a combination of one or more human beings and one or more physical components that are integrated through the common purpose of achieving some objective. The human operator interacts with the system through the Man-Machine Interface (MMI).

  11. Development and validation of methods for man-machine interface evaluation [for shuttles and shuttle payloads]

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Micocci, A.

    1975-01-01

    The alternate methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (task and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity to critical problems.

  12. Human perceptual deficits as factors in computer interface test and evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowser, S.E.

    1992-06-01

    Issues related to testing and evaluating human computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some deficits tend to be found at higher-than-overall rates in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Second, interface designs should use multimode information coding.

  13. Gloved Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Adams, Richard (Inventor); Hannaford, Blake (Inventor); Olowin, Aaron (Inventor)

    2015-01-01

    Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.

  14. Measuring human performance on NASA's microgravity aircraft

    NASA Technical Reports Server (NTRS)

    Morris, Randy B.; Whitmore, Mihriban

    1993-01-01

    Measuring human performance in a microgravity environment will aid in identifying the design requirements, human capabilities, safety, and productivity of future astronauts. The preliminary understanding of the microgravity effects on human performance can be achieved through evaluations conducted onboard NASA's KC-135 aircraft. These evaluations can be performed in relation to hardware performance, human-hardware interface, and hardware integration. Measuring human performance in the KC-135 simulated environment will contribute to the efforts of optimizing the human-machine interfaces for future and existing space vehicles. However, there are limitations, such as a limited number of qualified subjects, unexpected hardware problems, and miscellaneous plane movements, which must be taken into consideration. Examples of these evaluations, the results, and their implications are discussed in the paper.

  15. Learning Machine, Vietnamese Based Human-Computer Interface.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    The sixth session of IT@EDU98 consisted of seven papers on the topic of the learning machine--Vietnamese based human-computer interface, and was chaired by Phan Viet Hoang (Informatics College, Singapore). "Knowledge Based Approach for English Vietnamese Machine Translation" (Hoang Kiem, Dinh Dien) presents the knowledge base approach,…

  16. Multiple man-machine interfaces

    NASA Technical Reports Server (NTRS)

    Stanton, L.; Cook, C. W.

    1981-01-01

    The multiple man-machine interfaces inherent in military pilot training, their social implications, and the issue of possible negative feedback were explored. Modern technology has produced machines which can see, hear, and touch with greater accuracy and precision than human beings. Consequently, the military pilot is more a systems manager, often doing battle against a target he never sees. It is concluded that unquantifiable human activity requires motivation that is not intrinsic in a machine.

  17. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    NASA Astrophysics Data System (ADS)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  18. Delivering key signals to the machine: seeking the electric signal that muscles emanate

    NASA Astrophysics Data System (ADS)

    Bani Hashim, A. Y.; Maslan, M. N.; Izamshah, R.; Mohamad, I. S.

    2014-11-01

    Due to the limited electric power generated in the human body, present human-machine interfaces have not been successful: standard electronic circuit designs do not account for the characteristics of the signals that originate from the skin. In general, the outcomes and applications of human-machine interfaces are limited to custom-designed subsystems, such as neuroprostheses. We seek to model the biodynamics beneath the skin as equivalent mathematical definitions, descriptions, and theorems. Within the human skin there are networks of nerves that allow the skin to function as a multi-dimensional transducer. We investigate the structure of the skin; apart from the networks of nerves, it contains other segments such as minute muscles. We identify the segments that are active during electromyographic activity: when the nervous system fires signals, the muscle is stimulated. We evaluate the biodynamic phenomena of the muscles associated with the electromyographic activity of the nervous system. In effect, we define a relationship between the human somatosensory system and a synthetic sensory system as the union of the two into a new functional domain. This classifies electromyogram waveforms linked to the intent of an operator. The system will become the basis for delivering key signals to a machine so that the machine follows the operator's intent, i.e., it is slaved to the operator.

  19. A Cognitive Systems Engineering Approach to Developing Human Machine Interface Requirements for New Technologies

    NASA Astrophysics Data System (ADS)

    Fern, Lisa Carolynn

    This dissertation examines the challenges inherent in designing and regulating to support human-automation interaction for new technologies that will be deployed into complex systems. A key question for new technologies with increasingly capable automation is how work will be accomplished by human and machine agents. This question has traditionally been framed as how functions should be allocated between humans and machines. Such framing misses the coordination and synchronization that is needed for the different human and machine roles in the system to accomplish their goals. Coordination and synchronization demands are driven by the underlying human-automation architecture of the new technology, which is typically not specified explicitly by designers. The human machine interface (HMI), which is intended to facilitate human-machine interaction and cooperation, typically is defined explicitly and therefore serves as a proxy for human-automation cooperation requirements with respect to technical standards for technologies. Unfortunately, mismatches between the HMI and the coordination and synchronization demands of the underlying human-automation architecture can lead to system breakdowns. A methodology is needed that both designers and regulators can utilize to evaluate the predicted performance of a new technology given potential human-automation architectures. Three experiments were conducted to inform the minimum HMI requirements for a detect and avoid (DAA) system for unmanned aircraft systems (UAS). The results of the experiments provided empirical input to specific minimum operational performance standards that UAS manufacturers will have to meet in order to operate UAS in the National Airspace System (NAS). These studies represent a success story for how to objectively and systematically evaluate prototype technologies as part of the process for developing regulatory requirements. They also provide an opportunity to reflect on the lessons learned in order to improve the methodology for defining technology requirements for regulators in the future. The biggest shortcoming of the presented research program was the absence of the explicit definition, generation and analysis of potential human-automation architectures. Failure to execute this step in the research process resulted in less efficient evaluation of the candidate prototype technologies in addition to a lack of exploration of different approaches to human-automation cooperation. Defining potential human-automation architectures a priori also allows regulators to develop scenarios that will stress the performance boundaries of the technology during the evaluation phase. The importance of adding this step of generating and evaluating candidate human-automation architectures prior to formal empirical evaluation is discussed. This document concludes with a look at both the importance of, and the challenges facing, the inclusion of examining human-automation coordination issues as part of the safety assurance activities of new technologies.

  20. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
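
    The core signal-processing step mentioned here, filtering a sound with head-related transfer functions, amounts to convolving a mono cue with a left-ear and a right-ear impulse response. The sketch below uses crude delay-and-attenuate placeholders instead of measured HRTFs.

```python
# Sketch of the HRTF-filtering idea: a mono cue convolved with left/right head-related
# impulse responses yields a binaural (spatialized) signal. The impulse responses here
# are crude placeholders (delay + attenuation), not measured HRTFs.
import numpy as np

fs = 44100
t = np.arange(0, 0.25, 1 / fs)
cue = 0.5 * np.sin(2 * np.pi * 880 * t)            # mono auditory icon

# Fake "HRIRs" for a source off to the right: the left ear gets a later, quieter copy.
hrir_right = np.zeros(128)
hrir_right[0] = 1.0
hrir_left = np.zeros(128)
hrir_left[30] = 0.6                                 # ~0.68 ms interaural delay

left = np.convolve(cue, hrir_left)[: len(t)]
right = np.convolve(cue, hrir_right)[: len(t)]
binaural = np.stack([left, right], axis=1)          # write to a stereo WAV to audition
print(binaural.shape)
```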

  1. Techniques and applications for binaural sound manipulation in human-machine interfaces

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  2. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    PubMed

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking mapping loads or even disasters in product operation. It is important to help people avoid human-machine interaction confusions and difficulties in today's society dominated by mental work. The objective is to improve the usability of a product and minimize the user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, which is based on the purpose of minimizing the mental load in the thinking mapping process between users' intentions and the affordance of product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking loads is uniquely determined at first. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface states datasets. Finally, using the cluster analysis method, an optimum solution is picked out from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking mapping loads, a solution nearest to the ideal value is found in the human-car interaction design case. The clustering results show its effectiveness in finding an optimum solution to the mental load minimizing problems in human-machine interaction design.
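
    The selection step described at the end of this abstract, picking the alternative nearest to the ideal design, can be reduced to a distance computation over feature vectors. The sketch below invents three mapping-load features and three candidate designs purely for illustration; the paper's cluster-analysis distance between interface-state datasets is richer than this.

```python
# Sketch of the "pick the alternative nearest the ideal" step. Features and values
# are invented; the paper's distance between interface-state datasets is reduced here
# to a Euclidean distance between per-design feature vectors.
import numpy as np

# columns: steps to reach function, symbols to interpret, mode changes required
ideal = np.array([1, 1, 0])
alternatives = {
    "design_A": np.array([2, 3, 1]),
    "design_B": np.array([1, 2, 0]),
    "design_C": np.array([3, 1, 2]),
}

distances = {name: float(np.linalg.norm(vec - ideal)) for name, vec in alternatives.items()}
best = min(distances, key=distances.get)
print(distances, "-> optimum:", best)
```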

  3. Knowledge-based load leveling and task allocation in human-machine systems

    NASA Technical Reports Server (NTRS)

    Chignell, M. H.; Hancock, P. A.

    1986-01-01

    Conventional human-machine systems use task allocation policies which are based on the premise of a flexible human operator. This individual is most often required to compensate for and augment the capabilities of the machine. The development of artificial intelligence and improved technologies have allowed for a wider range of task allocation strategies. In response to these issues a Knowledge Based Adaptive Mechanism (KBAM) is proposed for assigning tasks to human and machine in real time, using a load leveling policy. This mechanism employs an online workload assessment and compensation system which is responsive to variations in load through an intelligent interface. This interface consists of a loading strategy reasoner which has access to information about the current status of the human-machine system as well as a database of admissible human/machine loading strategies. Difficulties standing in the way of successful implementation of the load leveling strategy are examined.

  4. A Function-Behavior-State Approach to Designing Human Machine Interface for Nuclear Power Plant Operators

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Zhang, W. J.

    2005-02-01

    This paper presents an approach to human-machine interface design for control room operators of nuclear power plants. The first step in designing an interface for a particular application is to determine the information content that needs to be displayed. The design methodology for this step is called the interface design framework (hereafter, the framework). Several frameworks have been proposed for applications at varying levels, including process plants. However, none is based on the design and manufacture of a plant system for which the interface is designed. This paper presents an interface design framework which originates from design theory and methodology for general technical systems. Specifically, the framework is based on a set of core concepts of a function-behavior-state model originally proposed by the artificial intelligence research community and widely applied in the design research community. Benefits of this new framework include the provision of a model-based fault diagnosis facility, and the seamless integration of the design (manufacture, maintenance) of plants and the design of human-machine interfaces. The missing linkage between design and operation of a plant was one of the causes of the Three Mile Island nuclear reactor incident. A simulated plant system is presented to explain how to apply this framework in designing an interface. The resulting human-machine interface is discussed; specifically, several fault diagnosis examples are elaborated to demonstrate how this interface could support operators' fault diagnosis in an unanticipated situation.

  5. Human machine interface display design document.

    DOT National Transportation Integrated Search

    2008-01-01

    The purpose of this document is to describe the design for the human machine interface (HMI) display for the Next Generation 9-1-1 (NG9-1-1) System (or system of systems) based on the initial Tier 1 requirements identified for the NG9-1-1 S...

  6. EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton.

    PubMed

    Yin, Yue H; Fan, Yuan J; Xu, Li D

    2012-07-01

    Although a lower extremity exoskeleton shows great prospect in the rehabilitation of the lower limb, it has not yet been widely applied to the clinical rehabilitation of the paralyzed. This is partly caused by insufficient information interactions between the paralyzed and existing exoskeleton that cannot meet the requirements of harmonious control. In this research, a bidirectional human-machine interface including a neurofuzzy controller and an extended physiological proprioception (EPP) feedback system is developed by imitating the biological closed-loop control system of human body. The neurofuzzy controller is built to decode human motion in advance by the fusion of the fuzzy electromyographic signals reflecting human motion intention and the precise proprioception providing joint angular feedback information. It transmits control information from human to exoskeleton, while the EPP feedback system based on haptic stimuli transmits motion information of the exoskeleton back to the human. Joint angle and torque information are transmitted in the form of air pressure to the human body. The real-time bidirectional human-machine interface can help a patient with lower limb paralysis to control the exoskeleton with his/her healthy side and simultaneously perceive motion on the paralyzed side by EPP. The interface rebuilds a closed-loop motion control system for paralyzed patients and realizes harmonious control of the human-machine system.

  7. Analysis of operational comfort in manual tasks using human force manipulability measure.

    PubMed

    Tanaka, Yoshiyuki; Nishikawa, Kazuo; Yamada, Naoki; Tsuji, Toshio

    2015-01-01

    This paper proposes a scheme for human force manipulability (HFM) based on the use of isometric joint torque properties to simulate the spatial characteristics of human operation forces at an end-point of a limb with feasible magnitudes for a specified limb posture. This is also applied to the evaluation/prediction of operational comfort (OC) when manually operating a human-machine interface. The effectiveness of HFM is investigated through two experiments and computer simulations of humans generating forces by using their upper extremities. Operation force generation with maximum isometric effort can be roughly estimated with an HFM measure computed from information on the arm posture during a maintained posture. The layout of a human-machine interface is then discussed based on the results of operational experiments using an electric gear-shifting system originally developed for robotic devices. The results indicate a strong relationship between the spatial characteristics of the HFM and OC levels when shifting, and the OC is predicted by using a multiple regression model with HFM measures.
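
    The spatial force capability that HFM captures follows from the kinematics: for isometric effort the end-point force F satisfies tau = J^T F, so F = (J^T)^-1 tau and the joint torque limits bound the feasible operation forces at a given posture. The sketch below computes this for a planar two-link surrogate of the arm with illustrative link lengths and torque limits, not the paper's upper-limb model.

```python
# Sketch of a force-manipulability computation for a planar 2-link arm: feasible
# end-point forces follow from joint torque limits through F = (J^T)^-1 * tau.
# Link lengths, posture and torque limits are illustrative values only.
import numpy as np

l1, l2 = 0.30, 0.25                 # upper arm / forearm lengths (m)
q1, q2 = np.radians([40, 70])       # shoulder / elbow joint angles
tau_max = np.array([40.0, 20.0])    # isometric joint torque limits (N*m)

# Jacobian of the planar 2-link arm end point with respect to the joint angles.
J = np.array([
    [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
    [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
])

# Map the torque-limit box corners to end-point forces; their spread characterizes
# the directional force capability (an HFM-like measure) for this posture.
corners = np.array([[sx * tau_max[0], sy * tau_max[1]] for sx in (-1, 1) for sy in (-1, 1)])
forces = np.linalg.solve(J.T, corners.T).T          # F = (J^T)^-1 tau for each corner
print("feasible end-point forces (N) at torque-limit corners:\n", forces.round(1))
```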

  8. Design Control Systems of Human Machine Interface in the NTVS-2894 Seat Grinder Machine to Increase the Productivity

    NASA Astrophysics Data System (ADS)

    Ardi, S.; Ardyansyah, D.

    2018-02-01

    In the manufacturing of automotive spare parts, increased vehicle sales have resulted in increased customer demand for engine valve production. To meet this demand, we carried out an improvement and overhaul of the NTVS-2894 seat grinder machine on a machining line, which had suffered decreased productivity and an increasing amount of trouble and downtime. The overhaul of the NTVS-2894 seat grinder machine covered both mechanics and programs, and included the design and manufacture of an HMI (Human Machine Interface) program for the GP-4501T, since prior to the overhaul the machine had no backup HMI program. The goals of the design were to improve production achievement and to allow an operator to operate the machine and troubleshoot it more easily, thereby reducing downtime on the NTVS-2894 seat grinder machine. After the redesign, the HMI program was successfully restored, machine productivity increased by 34.8%, and the amount of trouble and downtime decreased by 40%, from 3,160 minutes to 1,700 minutes. The implication of our design is that it facilitates the operator in operating the machine and makes it easier for the technician to maintain the machine and troubleshoot its problems.

  9. All printed touchless human-machine interface based on only five functional materials

    NASA Astrophysics Data System (ADS)

    Scheipl, G.; Zirkl, M.; Sawatdee, A.; Helbig, U.; Krause, M.; Kraker, E.; Andersson Ersman, P.; Nilsson, D.; Platt, D.; Bodö, P.; Bauer, S.; Domann, G.; Mogessie, A.; Hartmann, Paul; Stadlober, B.

    2012-02-01

    We demonstrate the printing of a complex smart integrated system using only five functional inks: the fluoropolymer P(VDF:TrFE) (poly(vinylidene fluoride-trifluoroethylene)) sensor ink, the conductive polymer PEDOT:PSS (poly(3,4-ethylenedioxythiophene):poly(styrene sulfonic acid)) ink, a conductive carbon paste, a polymeric electrolyte, and SU8 for separation. The result is a touchless human-machine interface, including piezo- and pyroelectric sensor pixels (sensitive to pressure changes and impinging infrared light), transistors for impedance matching and signal conditioning, and an electrochromic display. Applications may not only emerge in human-machine interfaces, but also in transient temperature or pressure sensing used in safety technology, in artificial skins and in disposable sensor labels.

  10. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing.

    PubMed

    Park, Ki-Woong; Lee, Younho; Baek, Sung Hoon

    2017-08-08

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber physical systems. The object includes mobile platforms like a smartphone, and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on their objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on-demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board, and present evaluation results from our testbed. From the evaluation result of T-Wing, we compare our system against alternative approaches to implement this functionality using only the cellular or Wi-Fi (but not both), and show that in typical usage, T-Wing consumes less than 15× the energy and is faster by an order of magnitude.

  11. Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.

    ERIC Educational Resources Information Center

    Acker, Stephen R.

    1986-01-01

    This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)

  12. People, planners and policy: is there an interface?

    Treesearch

    Susan Kopka

    1979-01-01

    This research attempts to isolate some of the dimensions of human evaluations/perceptions of the built environment through the use of an Audience Response Machine and a video tape of environmental scenes. The results suggest that there are commonalities in people's evaluations/perceptions and that this type of inquiry has prescriptive value for design/planning....

  13. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long accustoming times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even alleviate communication between specialists from different fields or in educational and training applications.

  14. Human-centered automation and AI - Ideas, insights, and issues from the Intelligent Cockpit Aids research effort

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.; Schutte, Paul C.

    1989-01-01

    A development status evaluation is presented for the NASA-Langley Intelligent Cockpit Aids research program, which encompasses AI, human/machine interfaces, and conventional automation. Attention is being given to decision-aiding concepts for human-centered automation, with emphasis on inflight subsystem fault management, inflight mission replanning, and communications management. The cockpit envisioned is for advanced commercial transport aircraft.

  15. MARTI: man-machine animation real-time interface

    NASA Astrophysics Data System (ADS)

    Jones, Christian M.; Dlay, Satnam S.

    1997-05-01

    The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.

  16. Literate Specification: Using Design Rationale To Support Formal Methods in the Development of Human-Machine Interfaces.

    ERIC Educational Resources Information Center

    Johnson, Christopher W.

    1996-01-01

    The development of safety-critical systems (aircraft cockpits and reactor control rooms) is qualitatively different from that of other interactive systems. These differences impose burdens on design teams that must ensure the development of human-machine interfaces. Analyzes strengths and weaknesses of formal methods for the design of user…

  17. [Mechatronic in functional endoscopic sinus surgery. First experiences with the daVinci Telemanipulatory System].

    PubMed

    Strauss, G; Winkler, D; Jacobs, S; Trantakis, C; Dietz, A; Bootz, F; Meixensberger, J; Falk, V

    2005-07-01

    This study examines the advantages and disadvantages of a commercial telemanipulator system (daVinci, Intuitive Surgical, USA) with computer-guided instruments in functional endoscopic sinus surgery (FESS). We performed five different surgical FESS steps on 14 anatomical preparations and compared them with conventional FESS. A total of 140 procedures were examined taking into account the following parameters: degrees of freedom (DOF), duration, learning curve, force feedback, and human-machine interface. Telemanipulatory instruments have more DOF available than conventional instrumentation in FESS. The average time consumed by configuration of the telemanipulator is around 9+/-2 min. Missing force feedback is evaluated mainly as a disadvantage of the telemanipulator. Scaling was evaluated as helpful. The ergonomic concept seems to be better than the conventional solution. Computer-guided instruments showed better results for the available DOF of the instruments. The human-machine interface is more adaptable and variable than in conventional instrumentation. Motion scaling and indexing are characteristics of the telemanipulator concept which are helpful for FESS in our study.

  18. Adapting human-machine interfaces to user performance.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2008-01-01

    The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor at the endpoint of the simulated arm.

  19. User-Based Information Retrieval System Interface Evaluation: An Examination of an On-Line Public Access Catalog.

    ERIC Educational Resources Information Center

    Hert, Carol A.; Nilan, Michael S.

    1991-01-01

    Presents preliminary data that characterizes the relationship between what users say they are trying to accomplish when using an online public access catalog (OPAC) and their perceptions of what input to give the system. Human-machine interaction is discussed, and appropriate methods for evaluating information retrieval systems are considered. (18…

  20. Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano; Vaz, Luís

    2010-01-01

    This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain Computer Interfaces (BCI), characterized by providing a small set of commands issued sparsely, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) that is used to train users' ability in steering a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to share the information provided by the local motion planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.

  1. Diverse applications of advanced man-telerobot interfaces

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas A.

    1991-01-01

    Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential application wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.

  2. Soft, Conformal Bioelectronics for a Wireless Human-Wheelchair Interface

    PubMed Central

    Mishra, Saswat; Norton, James J. S.; Lee, Yongkuk; Lee, Dong Sup; Agee, Nicolas; Chen, Yanfei; Chun, Youngjae; Yeo, Woon-Hong

    2017-01-01

    There are more than 3 million people in the world whose mobility relies on wheelchairs. Recent advancements in engineering technology enable more intuitive, easy-to-use rehabilitation systems. A human-machine interface that uses non-invasive, electrophysiological signals can allow a systematic interaction between human and devices; for example, eye movement-based wheelchair control. However, the existing machine-interface platforms are obtrusive, uncomfortable, and often cause skin irritations as they require a metal electrode affixed to the skin with a gel and acrylic pad. Here, we introduce a bioelectronic system that makes dry, conformal contact to the skin. The mechanically comfortable sensor records high-fidelity electrooculograms, comparable to the conventional gel electrode. Quantitative signal analysis and infrared thermographs show the advantages of the soft biosensor for an ergonomic human-machine interface. A classification algorithm with an optimized set of features shows an accuracy of 94% with five eye movements. A Bluetooth-enabled system incorporating the soft bioelectronics demonstrates a precise, hands-free control of a robotic wheelchair via electrooculograms. PMID:28152485
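
    The control chain described here, electrooculogram window in, wheelchair command out, is a classify-then-act loop. The sketch below fakes EOG windows for five eye movements, extracts a small feature set, and trains a linear discriminant classifier; the signals, features, and labels are placeholders, not the paper's soft-electrode recordings or its optimized feature set.

```python
# Sketch of the classify-eye-movement-and-issue-a-command loop. Synthetic EOG windows,
# feature set and class labels are placeholders for the paper's recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
classes = ["left", "right", "up", "down", "blink"]

def fake_eog(label):
    sig = rng.normal(0, 5, 250)                      # 1 s window at 250 Hz
    sig[50:120] += {"left": -80, "right": 80, "up": 60, "down": -60, "blink": 120}[label]
    return sig

def features(sig):
    return [sig.max(), sig.min(), sig.mean(), np.abs(np.diff(sig)).sum()]

X, y = [], []
for label in classes:
    for _ in range(40):
        X.append(features(fake_eog(label)))
        y.append(label)

clf = LinearDiscriminantAnalysis().fit(X, y)
command = clf.predict([features(fake_eog("right"))])[0]
print("wheelchair command:", command)                # e.g. map "right" -> turn right
```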

  3. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which have used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures with a different number of gestures in each group are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. In conclusion, combinations with the highest recognition accuracy in each group are chosen. An average accuracy >90% of chosen combinations proved their ability to be used as command controllers.
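
    The processing chain in this abstract is explicit: band-pass filter the facial EMG, take root-mean-square features, and group gesture windows with a fuzzy c-means classifier. The sketch below follows that chain on synthetic two-channel signals with three made-up gesture intensities, using SciPy for the filter and scikit-fuzzy for c-means; it illustrates the pipeline shape, not the paper's eleven-gesture study.

```python
# Sketch of the EMG pipeline: band-pass filter, RMS feature per channel, fuzzy c-means
# grouping of gesture windows. Signals and the three-gesture setup are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt
import skfuzzy as fuzz

fs = 1000
b, a = butter(4, [20, 450], btype="bandpass", fs=fs)

rng = np.random.default_rng(4)
def window(gain):
    raw = gain * rng.normal(0, 1, (2, fs))           # 2 facial EMG channels, 1 s
    filt = filtfilt(b, a, raw, axis=1)
    return np.sqrt(np.mean(filt ** 2, axis=1))       # RMS feature per channel

# 3 hypothetical gestures with different activation levels, 30 windows each
X = np.array([window(g) for g in [0.2] * 30 + [1.0] * 30 + [2.5] * 30])

cntr, u, *_ = fuzz.cluster.cmeans(X.T, c=3, m=2.0, error=1e-4, maxiter=200)
labels = np.argmax(u, axis=0)                        # hardened cluster label per window
print("cluster sizes:", np.bincount(labels))
```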

  4. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
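
    Both subsystems described here reduce to the same computation: find the center of a bright or dark blob in a camera image and map it to screen coordinates for cursor control. The sketch below does this with a synthetic frame and a linear calibration; real pupil and light-source tracking needs proper segmentation and per-user calibration.

```python
# Sketch of the "locate the center point and transfer it to screen coordinates" step.
# The synthetic image and linear calibration are placeholders for the camera images
# and calibration of the actual eye/head tracker.
import numpy as np

frame = np.full((480, 640), 200, dtype=np.uint8)     # bright background
frame[180:220, 300:350] = 20                         # dark blob standing in for the pupil

ys, xs = np.nonzero(frame < 60)                      # threshold for the dark region
cy, cx = ys.mean(), xs.mean()                        # centroid of the pupil/marker blob

screen_w, screen_h = 1920, 1080                      # map camera coords to cursor coords
cursor = (cx / frame.shape[1] * screen_w, cy / frame.shape[0] * screen_h)
print("centroid (px):", (round(cx, 1), round(cy, 1)), "-> cursor:", tuple(round(v) for v in cursor))
```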

  5. Future developments in brain-machine interface research.

    PubMed

    Lebedev, Mikhail A; Tate, Andrew J; Hanson, Timothy L; Li, Zheng; O'Doherty, Joseph E; Winans, Jesse A; Ifft, Peter J; Zhuang, Katie Z; Fitzsimmons, Nathan A; Schwarz, David A; Fuller, Andrew M; An, Je Hi; Nicolelis, Miguel A L

    2011-01-01

    Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interface that enacts upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering, and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition.

  6. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles.

    PubMed

    Eom, Hwisoo; Lee, Sang Hun

    2015-06-12

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model.

  7. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles

    PubMed Central

    Eom, Hwisoo; Lee, Sang Hun

    2015-01-01

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model. PMID:26076406

  8. Human-machine interface hardware: The next decade

    NASA Technical Reports Server (NTRS)

    Marcus, Elizabeth A.

    1991-01-01

    In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable, faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user interact and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroad in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.

  9. Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing

    PubMed Central

    Baek, Sung Hoon

    2017-01-01

    In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber physical systems. The object includes mobile platforms like a smartphone, and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on their objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on-demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board, and present evaluation results from our testbed. From the evaluation result of T-Wing, we compare our system against alternative approaches to implement this functionality using only the cellular or Wi-Fi (but not both), and show that in typical usage, T-Wing consumes less than 15× the energy and is faster by an order of magnitude. PMID:28786942

  10. Embedded System for Prosthetic Control Using Implanted Neuromuscular Interfaces Accessed Via an Osseointegrated Implant.

    PubMed

    Mastinu, Enzo; Doguet, Pascal; Botquin, Yohan; Hakansson, Bo; Ortiz-Catalan, Max

    2017-08-01

    Despite the technological progress in robotics achieved in the last decades, prosthetic limbs still lack functionality, reliability, and comfort. Recently, an implanted neuromusculoskeletal interface built upon osseointegration was developed and tested in humans, namely the Osseointegrated Human-Machine Gateway. Here, we present an embedded system to exploit the advantages of this technology. Our artificial limb controller allows for bioelectric signals acquisition, processing, decoding of motor intent, prosthetic control, and sensory feedback. It includes a neurostimulator to provide direct neural feedback based on sensory information. The system was validated using real-time tasks characterization, power consumption evaluation, and myoelectric pattern recognition performance. Functionality was proven in a first pilot patient from whom results of daily usage were obtained. The system was designed to be reliably used in activities of daily living, as well as a research platform to monitor prosthesis usage and training, machine-learning-based control algorithms, and neural stimulation paradigms.
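
    As a rough illustration of the signal chain described above (a sketch under our own assumptions, not the controller's firmware: the sampling rate, window lengths, features, and classifier are all placeholders), the snippet below extracts classic time-domain features from windowed multichannel EMG and trains a linear classifier to decode motor intent.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def features(window):
    """Mean absolute value and waveform length per channel (window: samples x channels)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

def windowed_features(emg, fs=1000, win_ms=200, step_ms=50):
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    return np.array([features(emg[i:i + win])
                     for i in range(0, len(emg) - win + 1, step)])

# Synthetic stand-in data: 4-channel EMG for two motor intents (rest vs. grip)
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.1, (4000, 4))
grip = rng.normal(0.0, 0.5, (4000, 4))
X = np.vstack([windowed_features(rest), windowed_features(grip)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

clf = LinearDiscriminantAnalysis().fit(X, y)   # decode motor intent from features
print("training accuracy:", clf.score(X, y))
```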

  11. Design of Human-Machine Interface and altering of pelvic obliquity with RGR Trainer.

    PubMed

    Pietrusinski, Maciej; Unluhisarcikli, Ozer; Mavroidis, Constantinos; Cajigas, Iahn; Bonato, Paolo

    2011-01-01

    The Robotic Gait Rehabilitation (RGR) Trainer targets secondary gait deviations in stroke survivors undergoing rehabilitation. Using an impedance control strategy and a linear electromagnetic actuator, the device generates a force field to control pelvic obliquity through a Human-Machine Interface (i.e. a lower body exoskeleton). Herein we describe the design of the RGR Trainer Human-Machine Interface (HMI) and demonstrate the system's ability to alter the pattern of movement of the pelvis during gait in a healthy subject. Results are shown for experiments during which we induced hip-hiking in healthy subjects. Our findings indicate that the RGR Trainer is able to affect pelvic obliquity during gait. Furthermore, we provide preliminary evidence of short-term retention of the modified pelvic obliquity pattern induced by the RGR Trainer. © 2011 IEEE
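
    The force-field idea can be sketched as a simple impedance law (our own illustration with made-up gains, not the RGR Trainer's actual controller): the actuator pushes pelvic obliquity toward a reference trajectory in proportion to the error, with a damping term, so the wearer can deviate but feels a corrective force.

```python
import math

def obliquity_force(theta, theta_dot, theta_ref, k=300.0, b=10.0):
    """Corrective force command from obliquity error and angular velocity.

    theta, theta_ref in rad; theta_dot in rad/s; k and b are illustrative
    stiffness and damping gains."""
    return k * (theta_ref - theta) - b * theta_dot

# Wearer hip-hikes 3 degrees above the reference at mid-stance:
print(obliquity_force(math.radians(3.0), 0.0, 0.0))   # negative force pulls the pelvis back
```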

  12. Design of Human – Machine Interface and Altering of Pelvic Obliquity with RGR Trainer

    PubMed Central

    Pietrusinski, Maciej; Unluhisarcikli, Ozer; Mavroidis, Constantinos; Cajigas, Iahn; Bonato, Paolo

    2012-01-01

    The Robotic Gait Rehabilitation (RGR) Trainer targets secondary gait deviations in stroke survivors undergoing rehabilitation. Using an impedance control strategy and a linear electromagnetic actuator, the device generates a force field to control pelvic obliquity through a Human-Machine Interface (i.e. a lower body exoskeleton). Herein we describe the design of the RGR Trainer Human-Machine Interface (HMI) and demonstrate the system's ability to alter the pattern of movement of the pelvis during gait in a healthy subject. Results are shown for experiments during which we induced hip-hiking in healthy subjects. Our findings indicate that the RGR Trainer is able to affect pelvic obliquity during gait. Furthermore, we provide preliminary evidence of short-term retention of the modified pelvic obliquity pattern induced by the RGR Trainer. PMID:22275693

  13. Future developments in brain-machine interface research

    PubMed Central

    Lebedev, Mikhail A; Tate, Andrew J; Hanson, Timothy L; Li, Zheng; O'Doherty, Joseph E; Winans, Jesse A; Ifft, Peter J; Zhuang, Katie Z; Fitzsimmons, Nathan A; Schwarz, David A; Fuller, Andrew M; An, Je Hi; Nicolelis, Miguel A L

    2011-01-01

    Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interface that enacts upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering, and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition. PMID:21779720

  14. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines their results through a fusion process to obtain a minimum-variance estimate of the orientation of the user's head. The sensing techniques are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to evaluate its performance objectively. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
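
    For two independent estimates with Gaussian noise, the minimum-variance fusion mentioned above reduces to inverse-variance weighting. The snippet below is a minimal sketch with made-up numbers, not the paper's filter.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# e.g. the IMU reports a head yaw of 12 deg (variance 4 deg^2),
# vision reports 9 deg (variance 1 deg^2)
print(fuse(12.0, 4.0, 9.0, 1.0))   # -> (9.6, 0.8): pulled toward the more reliable sensor
```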

  15. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines their results through a fusion process to obtain a minimum-variance estimate of the orientation of the user's head. The sensing techniques are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed to evaluate its performance objectively. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  16. Transfer of control system interface solutions from other domains to the thermal power industry.

    PubMed

    Bligård, L-O; Andersson, J; Osvalder, A-L

    2012-01-01

    In a thermal power plant, the operators' role is to control and monitor the process to achieve efficient and safe production, and the human-machine interfaces play a central part in this. The interfaces need to be updated and upgraded together with the technical functionality to maintain optimal operation. One way of achieving relevant updates is to study other domains and see how they have solved similar issues in their design solutions. The purpose of this paper is to present how interface design ideas can be transferred from domains with operator control to thermal power plants. In the study, 15 domains were compared using a model for the categorisation of human-machine systems. The domain comparison showed that nuclear power, refinery, and ship engine control are most similar to thermal power control. From the findings, a basic interface structure and three specific display solutions were proposed for thermal power control: a process parameter overview, a plant overview, and a feed water view. The systematic comparison of the properties of a human-machine system allowed interface designers to find suitable objects, structures, and navigation logics in a range of domains that could be transferred to the thermal power domain.

  17. [Human machines--mechanical humans? The industrial arrangement of the relation between human being and machine on the basis of psychotechnik and Georg Schlesingers work with disabled soldiers].

    PubMed

    Patzel-Mattern, Katja

    2005-01-01

    The 20th century is the century of technical artefacts. Through their existence and use, these artefacts create an artificial reality within which humans have to position themselves. Psychotechnik was an attempt to enable humans to achieve this positioning. It gained importance in Germany after World War I and had its heyday between 1919 and 1926. Taking as its starting point the work of the engineer and Psychotechnik proponent Georg Schlesinger, whose particular interest was disabled soldiers, this essay investigates how Psychotechnik, as an applied science, understood the body and the human being. It turns out that the greatest achievement of Psychotechnik was to establish a new view of the relation between human being and machine. It thus helped to show that the human-machine interface is a shapable unit. Psychotechnik sees the human body and its physique as the ultimate reference for the design of machines. Its main concern is to optimize the relation between human being and machine rather than to standardize human beings according to the construction of machines. After its splendid rise during the Weimar Republic and its rapid decline from the late 1920s onward, Psychotechnik today attracts scholarly attention as a historical phenomenon. The current discourse focuses mainly on aspects concerning the philosophy of science: the unity of body and soul, the understanding of the human-machine interface as a shapable unit, and the human being as the ultimate reference within this unit.

  18. Man-machine interface for the control of a lunar transport machine

    NASA Technical Reports Server (NTRS)

    Ashley, Richard; Bacon, Loring; Carlton, Scott Tim; May, Mark; Moore, Jimmy; Peek, Dennis

    1987-01-01

    A proposed first-generation human interface control panel is described which will be used to control SKITTER, a three-legged lunar walking machine. Under development at Georgia Tech, SKITTER will be a multi-purpose, unmanned vehicle capable of preparing a site for the proposed lunar base in advance of the arrival of men. This walking machine will be able to accept modular special-purpose tools, such as a crane, a core sampling drill, and a digging device, among others. The project was concerned with the design of a human interface which could be used, from Earth, to control the movements of SKITTER on the lunar surface. Preliminary inquiries were also made into the modifications required to adapt the panel both to a shirt-sleeve lunar environment and to a mobile unit which could be used by a man in a space suit at a lunar work site.

  19. Investigation of human-robot interface performance in household environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  20. Materials and optimized designs for human-machine interfaces via epidermal electronics.

    PubMed

    Jeong, Jae-Woong; Yeo, Woon-Hong; Akhtar, Aadeel; Norton, James J S; Kwack, Young-Jin; Li, Shuo; Jung, Sung-Young; Su, Yewang; Lee, Woosik; Xia, Jing; Cheng, Huanyu; Huang, Yonggang; Choi, Woon-Seop; Bretl, Timothy; Rogers, John A

    2013-12-17

    Thin, soft, and elastic electronics with physical properties well matched to the epidermis can be conformally and robustly integrated with the skin. Materials and optimized designs for such devices are presented for surface electromyography (sEMG). The findings enable sEMG from wide ranging areas of the body. The measurements have quality sufficient for advanced forms of human-machine interface. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A Tool for Assessing the Text Legibility of Digital Human Machine Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roger Lew; Ronald L. Boring; Thomas A. Ulrich

    2015-08-01

    A tool intended to aid qualified professionals in the assessment of the legibility of text presented on a digital display is described. The assessment of legibility is primarily for the purposes of designing and analyzing human machine interfaces in accordance with NUREG-0700 and MIL-STD 1472G. The tool addresses shortcomings of existing guidelines by providing more accurate metrics of text legibility with greater sensitivity to design alternatives.
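
    One core legibility metric such a tool is likely to compute is the visual angle subtended by character height at the expected viewing distance, since NUREG-0700-style guidance expresses minimum character height in minutes of arc. The sketch below is our own illustration; the 16-arcminute threshold is an assumed example value, not taken from the tool.

```python
import math

def char_visual_angle_arcmin(char_height_mm, viewing_distance_mm):
    """Visual angle (arcminutes) subtended by a character of the given height."""
    angle_rad = 2.0 * math.atan(char_height_mm / (2.0 * viewing_distance_mm))
    return math.degrees(angle_rad) * 60.0

h, d = 4.0, 700.0            # 4 mm characters viewed from 70 cm
angle = char_visual_angle_arcmin(h, d)
print(f"{angle:.1f} arcmin, meets assumed 16-arcmin minimum: {angle >= 16.0}")
```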

  2. Reverse-micelle-induced porous pressure-sensitive rubber for wearable human-machine interfaces.

    PubMed

    Jung, Sungmook; Kim, Ji Hoon; Kim, Jaemin; Choi, Suji; Lee, Jongsu; Park, Inhyuk; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2014-07-23

    A novel method to produce porous pressure-sensitive rubber is developed. For the controlled size distribution of embedded micropores, solution-based procedures using reverse micelles are adopted. The piezosensitivity of the pressure sensitive rubber is significantly increased by introducing micropores. Using this method, wearable human-machine interfaces are fabricated, which can be applied to the remote control of a robot. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video within a particle filtering framework, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence on the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
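
    The tracking step described above can be sketched as a generic particle filter (our own simplified illustration, not the paper's tracker): propagate hypotheses of the facial-motion parameters with a random-walk model, weight them by an appearance likelihood, and resample. Here a toy one-dimensional yaw parameter and a Gaussian likelihood stand in for the online appearance model.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, measure_loglik, motion_std=0.05):
    # 1. propagate: random-walk dynamics over the facial-motion parameters
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. weight: appearance-model likelihood of each hypothesis
    logw = np.array([measure_loglik(p) for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 3. resample to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy measurement: true head yaw of 0.3 rad, Gaussian appearance likelihood
loglik = lambda p: -0.5 * ((p[0] - 0.3) / 0.02) ** 2
particles = rng.normal(0.0, 0.2, (500, 1))
for _ in range(20):
    particles = particle_filter_step(particles, loglik)
print("estimated yaw:", float(particles.mean()))   # converges close to 0.3
```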

  4. Goal Tracking in a Natural Language Interface: Towards Achieving Adjustable Autonomy

    DTIC Science & Technology

    1999-01-01

    ...communication, we believe that human/machine interfaces that share some of the characteristics of human-human communication can be friendlier and easier... natural means of communicating with a mobile robot. Although we are not claiming that communication with robotic agents must be patterned after human...

  5. Tactual interfaces: The human perceiver

    NASA Technical Reports Server (NTRS)

    Srinivasan, M. A.

    1991-01-01

    Increasingly complex human-machine interactions, such as in teleoperation or in virtual environments, have necessitated the optimal use of the human tactual channel for information transfer. This need leads to a demand for a basic understanding of how the human tactual system works, so that the tactual interface between the human and the machine can receive the command signals from the human, as well as display the information to the human, in a manner that appears natural to the human. The tactual information consists of two components: (1) contact information which specifies the nature of direct contact with the object; and (2) kinesthetic information which refers to the position and motion of the limbs. This paper is mostly concerned with contact information.

  6. Using machine learning to emulate human hearing for predictive maintenance of equipment

    NASA Astrophysics Data System (ADS)

    Verma, Dinesh; Bent, Graham

    2017-05-01

    At the current time, interfaces between humans and machines use only a limited subset of senses that humans are capable of. The interaction among humans and computers can become much more intuitive and effective if we are able to use more senses, and create other modes of communicating between them. New machine learning technologies can make this type of interaction become a reality. In this paper, we present a framework for a holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for various tasks such as predictive maintenance.
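
    A minimal sketch of the machine-listening idea (entirely our own illustration with synthetic audio; the paper does not describe an algorithm at this level of detail): compute coarse spectral band energies for a healthy baseline recording and flag bands whose energy deviates strongly in a later recording, a crude stand-in for a learned model of how a healthy machine should sound.

```python
import numpy as np

def band_energies(signal, n_fft=1024, n_bands=8):
    """Coarse spectral band energies of the first n_fft samples."""
    spec = np.abs(np.fft.rfft(signal[:n_fft])) ** 2
    return np.array([b.sum() for b in np.array_split(spec, n_bands)])

rng = np.random.default_rng(2)
fs = 16000
healthy = band_energies(rng.normal(0.0, 1.0, fs))            # baseline recording
tone = 3.0 * np.sin(2 * np.pi * 3000 * np.arange(fs) / fs)   # fault signature at 3 kHz
faulty = band_energies(rng.normal(0.0, 1.0, fs) + tone)

ratio = faulty / healthy
print("suspicious bands:", np.where(ratio > 2.0)[0])   # the band holding the 3 kHz tone stands out
```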

  7. An evaluation of the ATM man/machine interface. Phase 3: Analysis of SL-3 and SL-4 data

    NASA Technical Reports Server (NTRS)

    Bathurst, J. R., Jr.; Pain, R. F.; Ludewig, D. B.

    1974-01-01

    The functional adequacy of human factored crew operated systems under operational zero-gravity conditions is considered. Skylab ATM experiment operations generated sufficient telemetry and voice transcript data to support such an assessment effort. Discussions are presented pertaining to the methodology and procedures used to evaluate the hardware, training and directive aspects of Skylab 3 and Skylab 4 manned ATM experiment operations.

  8. Study About Ceiling Design for Main Control Room of NPP with HFE

    NASA Astrophysics Data System (ADS)

    Gu, Pengfei; Ni, Ying; Chen, Weihua; Chen, Bo; Zhang, Jianbo; Liang, Huihui

    Since human factors engineering (HFE) has recently been applied to the control room design of nuclear power plants (NPPs), the human-machine interface (HMI) has developed steadily and harmoniously, especially through the use of digital technology. Compared with the analog technology formerly used for the HMI, human-machine interaction has been considerably enhanced. HFE and the main control room (MCR) design engineering of an NPP form a multidisciplinary effort, mainly involving electrical and instrumentation control, reactor engineering, machinery, systems engineering, and management disciplines. However, the MCR not only provides the HMI supplied by the equipment; more importantly, it provides the operators with a working environment, such as the main control room ceiling. The ceiling design of the main control room is related to HFE, which influences staff performance, and should also take environmental and aesthetic factors into account, in particular by introducing professional design experience and evaluation methods. Based on implementation experience from the Ling Ao Phase II and Hong Yanhe projects, this study analyzes lighting effects, space partitioning, and visual load for the ceiling of the main control room of an NPP. Combined with the requirements of the relevant standards, the advantages and disadvantages of the main control room ceiling design are discussed, and a ceiling design solution for the main control room that considers the requirements of light weight, noise reduction, fire prevention, and moisture protection is also presented.

  9. Design Guidelines and Criteria for User/Operator Transactions with Battlefield Automated Systems. Volume 5. Background Literature

    DTIC Science & Technology

    1981-02-01

    ...the machine. ARI's efforts in this area focus on human performance problems related to interactions with command and control centers, and on issues... improvement of the user-machine interface. Lacking consistent design principles, current practice results in a fragmented and unsystematic approach to system... complexity in the user-machine interface of BAS, ARI supported this effort for development of an online language for Army tactical intelligence...

  10. Man-machine analysis of translation and work tasks of Skylab films

    NASA Technical Reports Server (NTRS)

    Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.

    1979-01-01

    An objective approach to determine the concurrent validity of computer-graphic models is real time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.

  11. Human-machine interface (HMI) report for 241-SY-101 data acquisition [and control] system (DACS) upgrade study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truitt, R.W.

    1997-10-22

    This report provides an independent evaluation of information for a Windows-based Human Machine Interface (HMI) to replace the existing DOS-based Iconics HMI currently used in the Data Acquisition and Control System (DACS) at Tank 241-SY-101. A fundamental reason for this evaluation is the difficulty of maintaining the system with obsolete, unsupported software. The DACS uses a software operator interface (the Genesis for DOS HMI) that is no longer supported by its manufacturer, Iconics. In addition to its obsolescence, it is complex and difficult to train additional personnel on. The FY 1997 budget allocated $40K for phase 1 of a software/hardware upgrade that would have allowed the old DOS-based system to be replaced by a current Windows-based system. Unfortunately, budget constraints during FY 1997 prompted deferral of the upgrade. The upgrade needs to be performed at the earliest possible time, before other failures render the system useless. Once completed, the upgrade could alleviate other concerns: spare pump software may be able to be incorporated into the same software as the existing pump, thereby eliminating the parallel path dilemma; and the newer, less complex software should expedite training of future personnel and, in the process, require less technical time to maintain the system.

  12. Human factors in the presentation of computer-generated information - Aspects of design and application in automated flight traffic

    NASA Technical Reports Server (NTRS)

    Roske-Hofstrand, Renate J.

    1990-01-01

    The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.

  13. An operator interface design for a telerobotic inspection system

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tso, Kam S.; Hayati, Samad

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotics. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability, supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  14. Adaptive displays and controllers using alternative feedback.

    PubMed

    Repperger, D W

    2004-12-01

    Investigations on the design of haptic (force reflecting joystick or force display) controllers were conducted by viewing the display of force information within the context of several different paradigms. First, using analogies from electrical and mechanical systems, certain schemes of the haptic interface were hypothesized which may improve the human-machine interaction with respect to various criteria. A discussion is given on how this interaction benefits the electrical and mechanical system. To generalize this concept to the design of human-machine interfaces, three studies with haptic mechanisms were then synthesized and analyzed.

  15. Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment.

    ERIC Educational Resources Information Center

    Weller, Herman G.; Hartson, H. Rex

    1992-01-01

    Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…

  16. Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.

    2010-01-01

    From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.

  17. Biosleeve Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Assad, Christopher (Inventor)

    2016-01-01

    Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight fitting sleeve worn on a user arm and including a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal processing, and communications electronics may be built into the sleeve and control data may be transmitted wirelessly to the controlled machine or robotic device.

  18. Visualization tool for human-machine interface designers

    NASA Astrophysics Data System (ADS)

    Prevost, Michael P.; Banda, Carolyn P.

    1991-06-01

    As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
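
    The spring/attractor/repeller metaphor lends itself to a force-directed relaxation, sketched below purely as an illustration of the concept (not the ISLE code; the gains, distances, and relatedness matrix are invented): functionally related displays attract each other like springs while all displays repel at close range, and iterating the update drives the layout toward lower overall "tension".

```python
import numpy as np

def relax(pos, related, k_attract=0.05, k_repel=2000.0, iters=200):
    """Relax 2-D display positions under spring attraction and short-range repulsion."""
    pos = pos.astype(float).copy()
    n = len(pos)
    for _ in range(iters):
        force = np.zeros_like(pos)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d) + 1e-6
                force[i] -= k_repel * d / dist**3   # repeller: inverse-square push apart
                if related[i][j]:
                    force[i] += k_attract * d       # spring toward functionally related displays
        pos += force
    return pos

pos0 = np.array([[0, 0], [300, 0], [0, 300]])
related = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]         # displays 0 and 1 are functionally related
print(relax(pos0, related))                          # related pair ends up close, the third drifts away
```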

  19. Assessment of Human Factors

    NASA Technical Reports Server (NTRS)

    Mount, Frances; Foley, Tico

    1999-01-01

    Human Factors Engineering, often referred to as Ergonomics, is a science that applies a detailed understanding of human characteristics, capabilities, and limitations to the design, evaluation, and operation of environments, tools, and systems for work and daily living. Human Factors is the investigation, design, and evaluation of equipment, techniques, procedures, facilities, and human interfaces, and encompasses all aspects of human activity from manual labor to mental processing and leisure time enjoyments. In spaceflight applications, human factors engineering seeks to: (1) ensure that a task can be accomplished, (2) maintain productivity during spaceflight, and (3) ensure the habitability of the pressurized living areas. DSO 904 served as a vehicle for the verification and elucidation of human factors principles and tools in the microgravity environment. Over six flights, twelve topics were investigated. This study documented the strengths and limitations of human operators in a complex, multifaceted, and unique environment. By focusing on the man-machine interface in space flight activities, it was determined which designs allow astronauts to be optimally productive during valuable and costly space flights. Among the most promising areas of inquiry were procedures, tools, habitat, environmental conditions, tasking, work load, flexibility, and individual control over work.

  20. Man-machine interface requirements - advanced technology

    NASA Technical Reports Server (NTRS)

    Remington, R. W.; Wiener, E. L.

    1984-01-01

    Research issues and areas are identified where increased understanding of the human operator and the interaction between the operator and the avionics could lead to improvements in the performance of current and proposed helicopters. Both current and advanced helicopter systems and avionics are considered. Areas critical to man-machine interface requirements include: (1) artificial intelligence; (2) visual displays; (3) voice technology; (4) cockpit integration; and (5) pilot work loads and performance.

  1. Skills based evaluation of alternative input methods to command a semi-autonomous electric wheelchair.

    PubMed

    Rojas, Mario; Ponce, Pedro; Molina, Arturo

    2016-08-01

    This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head-movement control, and speech recognition, was designed to facilitate mobility skills for severely disabled people. Thirteen tasks common to all wheelchair users were attempted five times each, controlling the wheelchair with the virtual joystick and the hands-free interfaces, in different areas for disabled and non-disabled people. Even though the prototype has an intelligent navigation control based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scores showed that the head-movement control and the virtual joystick have similar capabilities, 92.3% and 100%, respectively. However, the 54.6% capacity score obtained for the speech control interface indicates that navigation assistance is needed to accomplish some of the goals. Furthermore, the evaluation times indicate which skills require more user training with the interface, and suggest specifications to improve the overall performance of the wheelchair.

  2. Ultrasensitive and Highly Stable Resistive Pressure Sensors with Biomaterial-Incorporated Interfacial Layers for Wearable Health-Monitoring and Human-Machine Interfaces.

    PubMed

    Chang, Hochan; Kim, Sungwoong; Jin, Sumin; Lee, Seung-Woo; Yang, Gil-Tae; Lee, Ki-Young; Yi, Hyunjung

    2018-01-10

    Flexible piezoresistive sensors have huge potential for health monitoring, human-machine interfaces, prosthetic limbs, and intelligent robotics. A variety of nanomaterials and structural schemes have been proposed for realizing ultrasensitive flexible piezoresistive sensors. However, despite the success of recent efforts, high sensitivity within narrower pressure ranges and/or the challenging adhesion and stability issues still potentially limit their broad applications. Herein, we introduce a biomaterial-based scheme for the development of flexible pressure sensors that are ultrasensitive (a resistance change of 5 orders of magnitude) over a broad pressure range of 0.1-100 kPa, promptly responsive (20 ms), and yet highly stable. We show that employing biomaterial-incorporated conductive networks of single-walled carbon nanotubes as interfacial layers of contact-based resistive pressure sensors significantly enhances piezoresistive response via effective modulation of the interlayer resistance and provides stable interfaces for the pressure sensors. The developed flexible sensor is capable of real-time monitoring of wrist pulse waves under external medium pressure levels and providing pressure profiles applied by a thumb and a forefinger during object manipulation at a low voltage (1 V) and power consumption (<12 μW). This work provides a new insight into the material candidates and approaches for the development of wearable health-monitoring and human-machine interfaces.

  3. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
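
    The matching step amounts to a simple feasibility filter over throughput and latency. The sketch below illustrates it with invented numbers (the actual figures in the paper differ): an interface is paired with a device only if it delivers at least the required throughput and stays within the tolerated latency.

```python
# Illustrative performance figures (bits/s, seconds); not the paper's values.
interfaces = {
    "P300 BCI":          {"throughput": 0.5,  "latency": 4.0},
    "Motor-imagery BCI": {"throughput": 1.0,  "latency": 2.0},
    "Joystick":          {"throughput": 10.0, "latency": 0.1},
}
devices = {   # required throughput, maximum tolerated latency
    "Domotic light switch": {"throughput": 0.2, "latency": 5.0},
    "Wheelchair":           {"throughput": 2.0, "latency": 1.0},
}

def feasible_pairs(interfaces, devices):
    """Keep (interface, device) pairs whose performance meets the device requirements."""
    return [(i, d)
            for i, ip in interfaces.items()
            for d, dp in devices.items()
            if ip["throughput"] >= dp["throughput"] and ip["latency"] <= dp["latency"]]

for pair in feasible_pairs(interfaces, devices):
    print(pair)
```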

  4. Best face forward.

    PubMed

    Rayport, Jeffrey F; Jaworski, Bernard J

    2004-12-01

    Most companies serve customers through a broad array of interfaces, from retail sales clerks to Web sites to voice-response telephone systems. But while the typical company has an impressive interface collection, it doesn't have an interface system. That is, the whole set does not add up to the sum of its parts in its ability to provide service and build customer relationships. Too many people and too many machines operating with insufficient coordination (and often at cross-purposes) mean rising complexity, costs, and customer dissatisfaction. In a world where companies compete not on what they sell but on how they sell it, turning that liability into an asset is what separates winners from losers. In this adaptation of their forthcoming book by the same title, Jeffrey Rayport and Bernard Jaworski explain how companies must reengineer their customer interface systems for optimal efficiency and effectiveness. Part of that transformation, they observe, will involve a steady encroachment by machine interfaces into areas that have long been the sacred province of humans. Managers now have opportunities unprecedented in the history of business to use machines, not just people, to credibly manage their interactions with customers. Because people and machines each have their strengths and weaknesses, company executives must identify what people do best, what machines do best, and how to deploy them separately and together. Front-office reengineering subjects every current and potential service interface to an analysis of opportunities for substitution (using machines instead of people), complementarity (using a mix of machines and people), and displacement (using networks to shift physical locations of people and machines), with the twin objectives of compressing costs and driving top-line growth through increased customer value.

  5. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical, errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  6. My thoughts through a robot's eyes: an augmented reality-brain-machine interface.

    PubMed

    Kansaku, Kenji; Hata, Naoki; Takano, Kouji

    2010-02-01

    A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roger Lew; Ronald L. Boring; Thomas A. Ulrich

    Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs, but the set of tools for developing and designing HMIs is still in its infancy. Here we propose that Microsoft Windows Presentation Foundation (WPF) is well suited for many roles in the research and development of HMIs for process control.

  8. Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface

    PubMed Central

    De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara

    2011-01-01

    A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative method of measuring human-robot interaction. To obtain this measure, an array of newly developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort in human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574

  9. Emotion detection from text

    NASA Astrophysics Data System (ADS)

    Ramalingam, V. V.; Pandian, A.; Jaiswal, Abhijeet; Bhatia, Nikhar

    2018-04-01

    This paper presents a novel method, based on the concept of machine learning, for emotion detection using various Support Vector Machine algorithms; the major emotions described are linked to WordNet for enhanced accuracy. The proposed approach shows promise for augmenting artificial intelligence in the near future and could be vital in the optimization of human-machine interfaces.
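
    A minimal sketch of an SVM-based text emotion classifier in the spirit of the approach above (our own toy example: it uses TF-IDF features rather than the paper's WordNet linking, and the training sentences and labels are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented training set: text snippets labeled with an emotion
texts = ["I am so happy today", "This is wonderful news",
         "I feel terrible and sad", "This makes me furious"]
labels = ["joy", "joy", "sadness", "anger"]

# TF-IDF features feeding a linear support vector machine
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

print(model.predict(["what a wonderful day"]))   # expected to lean toward "joy"
```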

  10. Human factors evaluation of teletherapy: Function and task analysis. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaye, R.D.; Henriksen, K.; Jones, R.

    1995-07-01

    As a treatment methodology, teletherapy selectively destroys cancerous and other tissue by exposure to an external beam of ionizing radiation. Sources of radiation are either a radioactive isotope, typically Cobalt-60 (Co-60), or a linear accelerator. Records maintained by the NRC have identified instances of teletherapy misadministration where the delivered radiation dose has differed from the radiation prescription (e.g., instances where fractions were delivered to the wrong patient, to the wrong body part, or were too great or too little with respect to the defined treatment volume). Both human error and machine malfunction have led to misadministrations. Effective and safe treatment requires a concern for precision and consistency of human-human and human-machine interactions throughout the course of therapy. The present study is the first part of a series of human factors evaluations for identifying the root causes that lead to human error in the teletherapy environment. The human factors evaluations included: (1) a function and task analysis of teletherapy activities, (2) an evaluation of the human-system interfaces, (3) an evaluation of procedures used by teletherapy staff, (4) an evaluation of the training and qualifications of treatment staff (excluding the oncologists), (5) an evaluation of organizational practices and policies, and (6) an identification of problems and alternative approaches for NRC and industry attention. The present report addresses the function and task analysis of teletherapy activities and provides the foundation for the conduct of the subsequent evaluations. The report includes sections on background, methodology, a description of the function and task analysis, and use of the task analysis findings for the subsequent tasks. The function and task analysis data base also is included.

  11. What Do We Really Need? Visions of an Ideal Human-Machine Interface for NOTES Mechatronic Support Systems From the View of Surgeons, Gastroenterologists, and Medical Engineers.

    PubMed

    Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Wilhelm, Dirk; Reiser, Silvano; Meining, Alexander; Feussner, Hubertus

    2015-08-01

    To investigate why natural orifice translumenal endoscopic surgery (NOTES) has not yet become widely accepted and to prove whether the main reason is still the lack of appropriate platforms due to the deficiency of applicable interfaces. To assess expectations of a suitable interface design, we performed a survey on human-machine interfaces for NOTES mechatronic support systems among surgeons, gastroenterologists, and medical engineers. Of 120 distributed questionnaires, each consisting of 14 distinct questions, 100 (83%) were eligible for analysis. A mechatronic platform for NOTES was considered "important" by 71% of surgeons, 83% of gastroenterologists, and 56% of medical engineers. "Intuitivity" and "simple to use" were the most favored aspects (33% to 51%). Haptic feedback was considered "important" by 70% of participants. In all, 53% of surgeons, 50% of gastroenterologists, and 33% of medical engineers already had experience with NOTES platforms or other surgical robots; however, current interfaces only met expectations in just more than 50%. Whereas surgeons did not favor a certain working posture, gastroenterologists and medical engineers preferred a sitting position. Three-dimensional visualization was generally considered "nice to have" (67% to 72%); however, for 26% of surgeons, 17% of gastroenterologists, and 7% of medical engineers it did not matter (P = 0.018). Requests and expectations of human-machine interfaces for NOTES seem to be generally similar for surgeons, gastroenterologists, and medical engineers. Consensus exists on the importance of developing interfaces that should be both intuitive and simple to use, are similar to preexisting familiar instruments, and exceed current available systems. © The Author(s) 2014.

  12. Soft brain-machine interfaces for assistive robotics: A novel control approach.

    PubMed

    Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash

    2017-07-01

    Robotic systems offer the possibility of improving the life quality of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for an intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real-time a unidimensional index from user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence on the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
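
    The final mapping described above, from the EEG-derived index to the robot's Cartesian stiffness, can be as simple as a bounded linear map. The snippet below is our own illustration; the stiffness range and the linear form are assumptions, not the paper's calibration.

```python
def stiffness_from_index(p, k_min=100.0, k_max=1000.0):
    """Cartesian stiffness [N/m] from the EEG-derived probability of an 'active' state.

    p is clipped to [0, 1]; k_min and k_max are illustrative bounds."""
    p = min(max(p, 0.0), 1.0)
    return k_min + p * (k_max - k_min)

for p in (0.0, 0.5, 0.9):
    print(p, stiffness_from_index(p))   # neutral state -> compliant, active state -> stiff
```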

  13. A study of speech interfaces for the vehicle environment.

    DOT National Transportation Integrated Search

    2013-05-01

    Over the past few years, there has been a shift in automotive human machine interfaces from : visual-manual interactions (pushing buttons and rotating knobs) to speech interaction. In terms of : distraction, the industry views speech interaction as a...

  14. A Cognitive Systems Engineering Approach to Developing HMI Requirements for New Technologies

    NASA Technical Reports Server (NTRS)

    Fern, Lisa Carolynn

    2016-01-01

    This document examines the challenges inherent in designing and regulating to support human-automation interaction for new technologies that will be deployed into complex systems. A key question for new technologies is how work will be accomplished by the human and machine agents. This question has traditionally been framed as how functions should be allocated between humans and machines. Such framing misses the coordination and synchronization that is needed for the different human and machine roles in the system to accomplish their goals. Coordination and synchronization demands are driven by the underlying human-automation architecture of the new technology, which is typically not specified explicitly by the designers. The human-machine interface (HMI), which is intended to facilitate human-machine interaction and cooperation, however, typically is defined explicitly and therefore serves as a proxy for human-automation cooperation requirements with respect to technical standards for technologies. Unfortunately, mismatches between the HMI and the coordination and synchronization demands of the underlying human-automation architecture can lead to system breakdowns. A methodology is needed that both designers and regulators can utilize to evaluate the expected performance of a new technology given potential human-automation architectures. Three experiments were conducted to inform the minimum HMI requirements for a detect-and-avoid system for unmanned aircraft systems (UAS). The results of the experiments provided empirical input to specific minimum operational performance standards that UAS manufacturers will have to meet in order to operate UAS in the National Airspace System (NAS). These studies represent a success story for how to objectively and systematically evaluate prototype technologies as part of the process for developing regulatory requirements. They also provide an opportunity to reflect on the lessons learned from a recent research effort in order to improve the methodology for defining technology requirements for regulators in the future. The biggest shortcoming of the presented research program was the absence of the explicit definition, generation, and analysis of potential human-automation architectures. Failure to execute this step in the research process resulted in less efficient evaluation of the candidate prototype technologies, in addition to the complete absence of different approaches to human-automation cooperation. For example, all of the prototype technologies that were evaluated in the research program assumed a human-automation architecture that relied on serial processing from the automation to the human. While this type of human-automation architecture is typical across many different technologies and in many different domains, it ignores different architectures where humans and automation work in parallel. Defining potential human-automation architectures a priori also allows regulators to develop scenarios that will stress the performance boundaries of the technology during the evaluation phase. The importance of adding this step of generating and evaluating candidate human-automation architectures prior to formal empirical evaluation is discussed.

  15. Evaluating the Toxicity of Cigarette Whole Smoke Solutions in an Air-Liquid-Interface Human In Vitro Airway Tissue Model.

    PubMed

    Cao, Xuefei; Muskhelishvili, Levan; Latendresse, John; Richter, Patricia; Heflich, Robert H

    2017-03-01

    Exposure to cigarette smoke causes a multitude of pathological changes leading to tissue damage and disease. Quantifying such changes in highly differentiated in vitro human tissue models may assist in evaluating the toxicity of tobacco products. In this methods development study, well-differentiated human air-liquid-interface (ALI) in vitro airway tissue models were used to assess toxicological endpoints relevant to tobacco smoke exposure. Whole mainstream smoke solutions (WSSs) were prepared from 2 commercial cigarettes (R60 and S60) that differ in smoke constituents when machine-smoked under International Organization for Standardization conditions. The airway tissue models were exposed apically to WSSs 4-h per day for 1-5 days. Cytotoxicity, tissue barrier integrity, oxidative stress, mucin secretion, and matrix metalloproteinase (MMP) excretion were measured. The treatments were not cytotoxic and had marginal effects on tissue barrier properties; however, other endpoints responded in time- and dose-dependent manners, with the R60 resulting in higher levels of response than the S60 for many endpoints. Based on the lowest effect dose, differences in response to the WSSs were observed for mucin induction and MMP secretion. Mitigation of mucin induction by cotreatment of cultures with N-acetylcysteine suggests that oxidative stress contributes to mucus hypersecretion. Overall, these preliminary results suggest that quantifying disease-relevant endpoints using ALI airway models is a potential tool for tobacco product toxicity evaluation. Additional research using tobacco samples generated under smoking machine conditions that more closely approximate human smoking patterns will inform further methods development. Published by Oxford University Press on behalf of the Society of Toxicology 2017. This work is written by US Government employees and is in the public domain in the US.

  16. Human factors in space telepresence

    NASA Technical Reports Server (NTRS)

    Akin, D. L.; Howard, R. D.; Oliveria, J. S.

    1983-01-01

    The problems of interfacing a human with a teleoperation system for work in space are discussed. Much of the information presented here is the result of experience gained by the M.I.T. Space Systems Laboratory during the past two years of work on the ARAMIS (Automation, Robotics, and Machine Intelligence Systems) project. Many factors impact the design of the man-machine interface for a teleoperator; the effects of each are described in turn. An annotated bibliography gives the key references that were used. No conclusions are presented as to a best design, since much depends on the particular application desired, and the relevant technology is changing swiftly.

  17. PubMed Central

    Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele

    2016-01-01

    This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom of movement and for performing Activities of Daily Living (ADLs). This would result in a deep improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, integrating these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the patient's degree of participation in robotic training by allowing the system to recognize the user's intention and assist the patient in rehabilitation tasks, thus representing a fundamental aspect for eliciting motor learning. PMID:28484314

  18. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

    Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability. It supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  19. Soft Material-Enabled, Flexible Hybrid Electronics for Medicine, Healthcare, and Human-Machine Interfaces

    PubMed Central

    Herbert, Robert; Kim, Jong-Hoon; Kim, Yun Soung; Lee, Hye Moon

    2018-01-01

    Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas. PMID:29364861

  20. Soft Material-Enabled, Flexible Hybrid Electronics for Medicine, Healthcare, and Human-Machine Interfaces.

    PubMed

    Herbert, Robert; Kim, Jong-Hoon; Kim, Yun Soung; Lee, Hye Moon; Yeo, Woon-Hong

    2018-01-24

    Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas.

  1. Research interface on a programmable ultrasound scanner.

    PubMed

    Shamdasani, Vijay; Bae, Unmin; Sikdar, Siddhartha; Yoo, Yang Mo; Karadayi, Kerem; Managuli, Ravi; Kim, Yongmin

    2008-07-01

    Commercial ultrasound machines in the past did not provide the ultrasound researchers access to raw ultrasound data. Lack of this ability has impeded evaluation and clinical testing of novel ultrasound algorithms and applications. Recently, we developed a flexible ultrasound back-end where all the processing for the conventional ultrasound modes, such as B, M, color flow and spectral Doppler, was performed in software. The back-end has been incorporated into a commercial ultrasound machine, the Hitachi HiVision 5500. The goal of this work is to develop an ultrasound research interface on the back-end for acquiring raw ultrasound data from the machine. The research interface has been designed as a software module on the ultrasound back-end. To increase the amount of raw ultrasound data that can be spooled in the limited memory available on the back-end, we have developed a method that can losslessly compress the ultrasound data in real time. The raw ultrasound data could be obtained in any conventional ultrasound mode, including duplex and triplex modes. Furthermore, use of the research interface does not decrease the frame rate or otherwise affect the clinical usability of the machine. The lossless compression of the ultrasound data in real time can increase the amount of data spooled by approximately 2.3 times, thus allowing more than 6s of raw ultrasound data to be acquired in all the modes. The interface has been used not only for early testing of new ideas with in vitro data from phantoms, but also for acquiring in vivo data for fine-tuning ultrasound applications and conducting clinical studies. We present several examples of how newer ultrasound applications, such as elastography, vibration imaging and 3D imaging, have benefited from this research interface. Since the research interface is entirely implemented in software, it can be deployed on existing HiVision 5500 ultrasound machines and may be easily upgraded in the future. The developed research interface can aid researchers in the rapid testing and clinical evaluation of new ultrasound algorithms and applications. Additionally, we believe that our approach would be applicable to designing research interfaces on other ultrasound machines.
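
    A minimal sketch of the spooling idea described above, assuming delta encoding followed by a general-purpose deflate coder; the paper does not disclose its actual real-time lossless compression algorithm, so the scheme, frame shape, and synthetic data below are illustrative only.

```python
# Hedged sketch: real-time lossless compression of raw ultrasound frames.
# The delta-encoding + zlib scheme is an assumption, not the paper's method.
import zlib
import numpy as np

def compress_frame(frame: np.ndarray) -> bytes:
    """Losslessly compress one frame of raw int16 samples."""
    # Delta-encode along fast time: adjacent samples are highly correlated,
    # so the residuals compress far better than the raw values.
    residual = np.diff(frame.astype(np.int32), axis=-1, prepend=0)
    return zlib.compress(residual.tobytes(), level=1)  # low level keeps latency small

def decompress_frame(blob: bytes, shape) -> np.ndarray:
    residual = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return np.cumsum(residual, axis=-1).astype(np.int16)

rng = np.random.default_rng(0)
# Synthetic stand-in for a beamformed RF frame: smooth signal plus noise.
frame = (1000 * np.sin(np.linspace(0, 50, 2048))
         + rng.normal(0, 20, (128, 2048))).astype(np.int16)
blob = compress_frame(frame)
assert np.array_equal(frame, decompress_frame(blob, frame.shape))  # lossless round trip
print("compression ratio:", round(frame.nbytes / len(blob), 2))
```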

  2. Screen-Printed Washable Electronic Textiles as Self-Powered Touch/Gesture Tribo-Sensors for Intelligent Human-Machine Interaction.

    PubMed

    Cao, Ran; Pu, Xianjie; Du, Xinyu; Yang, Wei; Wang, Jiaona; Guo, Hengyu; Zhao, Shuyu; Yuan, Zuqing; Zhang, Chi; Li, Congju; Wang, Zhong Lin

    2018-05-22

    Multifunctional electronic textiles (E-textiles) with embedded electric circuits hold great application prospects for future wearable electronics. However, most E-textiles still have critical challenges, including air permeability, satisfactory washability, and mass fabrication. In this work, we fabricate a washable E-textile that addresses all of the concerns and shows its application as a self-powered triboelectric gesture textile for intelligent human-machine interfacing. Utilizing conductive carbon nanotubes (CNTs) and screen-printing technology, this kind of E-textile embraces high conductivity (0.2 kΩ/sq), high air permeability (88.2 mm/s), and can be manufactured on common fabric at large scales. Due to the advantage of the interaction between the CNTs and the fabrics, the electrode shows excellent stability under harsh mechanical deformation and even after being washed. Moreover, based on a single-electrode mode triboelectric nanogenerator and electrode pattern design, our E-textile exhibits highly sensitive touch/gesture sensing performance and has potential applications for human-machine interfacing.

  3. State of the art in nuclear telerobotics: focus on the man/machine connection

    NASA Astrophysics Data System (ADS)

    Greaves, Amna E.

    1995-12-01

    The interface between the human controller and remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality, and multi degree-of-freedom input devices lend us the ability to augment the man/machine interface, and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI that enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting edge concepts and enhancements that lead to the next generation of telerobotic interface systems.

  4. New generation emerging technologies for neurorehabilitation and motor assistance.

    PubMed

    Frisoli, Antonio; Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele

    2016-12-01

    This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies and in particular on robotic exoskeletons as tools for increasing freedom of movement and performing Activities of Daily Living (ADLs). This would result in a deep improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the degree of participation of the patient in robotic training, allowing the system to recognize the user's intention and assist the patient in rehabilitation tasks, thus representing a fundamental aspect in eliciting motor learning.

  5. Three-dimensional anthropometric techniques applied to the fabrication of burn masks and the quantification of wound healing

    NASA Astrophysics Data System (ADS)

    Whitestone, Jennifer J.; Geisen, Glen R.; McQuiston, Barbara K.

    1997-03-01

    Anthropometric surveys conducted by the military provide comprehensive human body measurement data that are human interface requirements for successful mission performance of weapon systems, including cockpits, protective equipment, and clothing. The application of human body dimensions to model humans and human-machine performance begins with engineering anthropometry. There are two critical elements to engineering anthropometry: data acquisition and data analysis. First, the human body is captured dimensionally with either traditional anthropometric tools, such as calipers and tape measures, or with advanced image acquisition systems, such as a laser scanner. Next, numerous statistical analysis tools, such as multivariate modeling and feature envelopes, are used to effectively transition these data for design and evaluation of equipment and work environments. Recently, Air Force technology transfer allowed researchers at the Computerized Anthropometric Research and Design (CARD) Laboratory at Wright-Patterson Air Force Base to work with the Dayton, Ohio area medical community in assessing the rate of wound healing and improving the fit of total contract burn masks. This paper describes the successful application of CARD Lab engineering anthropometry to two medically oriented human interface problems.

  6. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    ERIC Educational Resources Information Center

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  7. Feasibility of task-specific brain-machine interface training for upper-extremity paralysis in patients with chronic hemiparetic stroke.

    PubMed

    Nishimoto, Atsuko; Kawakami, Michiyuki; Fujiwara, Toshiyuki; Hiramoto, Miho; Honaga, Kaoru; Abe, Kaoru; Mizuno, Katsuhiro; Ushiba, Junichi; Liu, Meigen

    2018-01-10

    Brain-machine interface training was developed for upper-extremity rehabilitation for patients with severe hemiparesis. Its clinical application, however, has been limited because of its lack of feasibility in real-world rehabilitation settings. We developed a new compact task-specific brain-machine interface system that enables task-specific training, including reach-and-grasp tasks, and studied its clinical feasibility and effectiveness for upper-extremity motor paralysis in patients with stroke. Prospective before-after study. Twenty-six patients with severe chronic hemiparetic stroke. Participants were trained with the brain-machine interface system to pick up and release pegs during 40-min sessions and 40 min of standard occupational therapy per day for 10 days. Fugl-Meyer upper-extremity motor (FMA) and Motor Activity Log-14 amount of use (MAL-AOU) scores were assessed before and after the intervention. To test its feasibility, 4 occupational therapists who operated the system for the first time assessed it with the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) 2.0. FMA and MAL-AOU scores improved significantly after brain-machine interface training, with the effect sizes being medium and large, respectively (p<0.01, d=0.55; p<0.01, d=0.88). QUEST effectiveness and safety scores showed feasibility and satisfaction in the clinical setting. Our newly developed compact brain-machine interface system is feasible for use in real-world clinical settings.
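
    For readers unfamiliar with the reported statistics, here is a small illustration of how a before-after effect size such as d=0.55 or d=0.88 can be computed; the study does not state which d formula was used, so this sketch assumes Cohen's d for paired samples, and the scores are invented, not study data.

```python
# Hedged sketch: paired-samples Cohen's d (mean change / SD of change).
import numpy as np

def paired_cohens_d(before: np.ndarray, after: np.ndarray) -> float:
    change = after - before
    return float(change.mean() / change.std(ddof=1))

# Hypothetical FMA-like scores for six participants (illustrative only).
before = np.array([20.0, 25.0, 18.0, 30.0, 22.0, 27.0])
after = np.array([23.0, 24.0, 20.0, 33.0, 21.0, 30.0])
print(f"d = {paired_cohens_d(before, after):.2f}")  # ~0.76 for these made-up scores
```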

  8. Clinical Outcome of Hydroxyapatite Coated, Bioactive Glass Coated, and Machined Ti6Al4V Threaded Dental Implant in Human Jaws: A Short-Term Comparative Study.

    PubMed

    Mistry, Surajit; Roy, Rajiv; Kundu, Biswanath; Datta, Someswar; Kumar, Manoj; Chanda, Abhijit; Kundu, Debabrata

    2016-04-01

    Growing aspect of endosseous implant research is focused on surface modification of dental implants for the purpose of improving osseointegration. The aim of this study was to evaluate and compare the clinical outcome (ie, osseointegration) of hydroxyapatite coated, bioactive glass coated and machined titanium alloy threaded dental implants in human jaw bone after implantation. One hundred twenty-six implants (45 hydroxyapatite coated, 41 bioactive glass coated, and 40 machined titanium implants) have been placed in incisor areas of 62 adult patients. Outcome was assessed up to 12 months after prosthetic rehabilitation using different clinical and radiological parameters. Surface roughness of failed implants was analyzed by laser profilometer. Hydroxyapatite and bioactive glass coating materials were nontoxic and biocompatible. Least marginal bone loss in radiograph, significantly higher (P < 0.05) interface radiodensity, and less interfacial gaps were observed in computed tomography with bioactive glass coated implants at anterior maxilla compared to other 2 types. Bioactive glass coated implants are equally safe and effective as hydroxyapatite coated and machined titanium implants in achieving osseointegration; therefore, can be effectively used as an alternative coating material for dental implants.

  9. Mechanically Compliant Electronic Materials for Wearable Photovoltaics and Human-Machine Interfaces

    NASA Astrophysics Data System (ADS)

    O'Connor, Timothy Francis, III

    Applications of stretchable electronic materials for human-machine interfaces are described herein. Intrinsically stretchable organic conjugated polymers and stretchable electronic composites were used to develop stretchable organic photovoltaics (OPVs), mechanically robust wearable OPVs, and human-machine interfaces for gesture recognition, American Sign Language Translation, haptic control of robots, and touch emulation for virtual reality, augmented reality, and the transmission of touch. The stretchable and wearable OPVs comprise active layers of poly-3-alkylthiophene:phenyl-C61-butyric acid methyl ester (P3AT:PCBM) and transparent conductive electrodes of poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS), and devices could only be fabricated through a deep understanding of the connection between molecular structure and the co-engineering of electronic performance with mechanical resilience. The talk concludes with the use of composite piezoresistive sensors in two smart glove prototypes. The first integrates stretchable strain sensors comprising a carbon-elastomer composite, a wearable microcontroller, low energy Bluetooth, and a 6-axis accelerometer/gyroscope to construct a fully functional gesture recognition glove capable of wirelessly translating American Sign Language to text on a cell phone screen. The second creates a system for the haptic control of a 3D printed robot arm, as well as the transmission of touch and temperature information.

  10. Histological Evaluation of a Chronically-implanted Electrocorticographic Electrode Grid in a Non-human Primate

    PubMed Central

    Degenhart, Alan D.; Eles, James; Dum, Richard; Mischel, Jessica L.; Smalianchuk, Ivan; Endler, Bridget; Ashmore, Robin C.; Tyler-Kabara, Elizabeth C.; Hatsopoulos, Nicholas G.; Wang, Wei; Batista, Aaron P.; Cui, X. Tracy

    2016-01-01

    Electrocorticography (ECoG), used as a neural recording modality for brain-machine interfaces (BMIs), potentially allows for field potentials to be recorded from the surface of the cerebral cortex for long durations without suffering the host-tissue reaction to the extent that it is common with intracortical microelectrodes. Though the stability of signals obtained from chronically-implanted ECoG electrodes has begun receiving attention, to date little work has characterized the effects of long-term implantation of ECoG electrodes on underlying cortical tissue. We implanted a high-density ECoG electrode grid subdurally over cortical motor areas of a Rhesus macaque for 666 days. Histological analysis revealed minimal damage to the cortex underneath the implant, though the grid itself was encapsulated in collagenous tissue. We observed macrophages and foreign body giant cells at the tissue-array interface, indicative of a stereotypical foreign body response. Despite this encapsulation, cortical modulation during reaching movements was observed more than 18 months post-implantation. These results suggest that ECoG may provide a means by which stable chronic cortical recordings can be obtained with comparatively little tissue damage, facilitating the development of clinically-viable brain-machine interface systems. PMID:27351722

  11. Multichannel noninvasive human-machine interface via stretchable µm thick sEMG patches for robot manipulation

    NASA Astrophysics Data System (ADS)

    Zhou, Ying; Wang, Youhua; Liu, Runfeng; Xiao, Lin; Zhang, Qin; Huang, YongAn

    2018-01-01

    Epidermal electronics (e-skin) emerging in recent years offer the opportunity to noninvasively and wearably extract biosignals from human bodies. The conventional processes of e-skin based on standard microelectronic fabrication processes and a variety of transfer printing methods, nevertheless, unquestionably constrains the size of the devices, posing a serious challenge to collecting signals via skin, the largest organ in the human body. Herein we propose a multichannel noninvasive human-machine interface (HMI) using stretchable surface electromyography (sEMG) patches to realize a robot hand mimicking human gestures. Time-efficient processes are first developed to manufacture µm thick large-scale stretchable devices. With micron thickness, the stretchable µm thick sEMG patches show excellent conformability with human skin and consequently comparable electrical performance with conventional gel electrodes. Combined with the large-scale size, the multichannel noninvasive HMI via stretchable µm thick sEMG patches successfully manipulates the robot hand with eight different gestures, whose precision is as high as conventional gel electrodes array.

  12. Cursor control by Kalman filter with a non-invasive body–machine interface

    PubMed Central

    Seáñez-González, Ismael; Mussa-Ivaldi, Ferdinando A

    2015-01-01

    Objective We describe a novel human–machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user’s upper-body. Approach A calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor. Main results Our results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body–machine interface systems as an alternative or complement to brain–machine interfaces for accomplishing cursor control in 2D space. Significance The present study may serve as a platform for people with high-tetraplegia to control assistive devices such as powered wheelchairs using a joystick. PMID:25242561
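
    A minimal sketch of the decoding idea: a Kalman filter whose observation model is a calibrated linear map between cursor kinematics and IMU-derived body signals. The state layout, noise covariances, and the random calibration matrix below are assumptions for illustration, not the study's parameters.

```python
# Hedged sketch: Kalman-filter cursor decoding from IMU features.
import numpy as np

class CursorKalman:
    def __init__(self, H: np.ndarray, dt: float = 0.02):
        # State x = [px, py, vx, vy] with constant-velocity dynamics.
        self.A = np.eye(4)
        self.A[0, 2] = self.A[1, 3] = dt
        self.H = H                               # state -> expected IMU features
        self.Q = 1e-3 * np.eye(4)                # process noise (assumed)
        self.R = 1e-2 * np.eye(H.shape[0])       # observation noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z: np.ndarray) -> np.ndarray:
        # Predict
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        # Update with the current IMU observation z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                        # decoded cursor position

# H would be fit during the calibration phase by regressing IMU features
# against the followed cursor's kinematics; a random H stands in here.
rng = np.random.default_rng(1)
kf = CursorKalman(H=rng.normal(size=(8, 4)))     # 8 IMU features, 4 states
print(kf.step(rng.normal(size=8)))
```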

  13. Developing Human-Machine Interfaces to Support Appropriate Trust and Reliance on Automated Combat Identification Systems (Developpement d’Interfaces Homme-Machine Pour Appuyer la Confiance dans les Systemes Automatises d’Identification au Combat)

    DTIC Science & Technology

    2008-03-31

    on automation; the ‘response bias’ approach. This new approach is based on Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens...SDT), response bias will vary with the expectation of the target probability, whereas their sensitivity will stay constant (Macmillan & Creelman...measures, C has the simplest statistical properties (Macmillan & Creelman, 1991, p273), and it was also the measure used in Dzindolet et al.’s study
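
    The fragment above refers to Signal Detection Theory sensitivity and the response-bias measure C. A short sketch of those standard quantities computed from hit and false-alarm rates (the example rates are arbitrary, not values from the report):

```python
# Hedged sketch: standard SDT sensitivity (d') and criterion (C).
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias C
    return d_prime, criterion

print(sdt_measures(hit_rate=0.85, fa_rate=0.20))   # example rates only
```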

  14. Experimental setup for evaluating an adaptive user interface for teleoperation control

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interactions with a machine is the control interface, which single-handedly could define the user satisfaction and the efficiency of performing a task. This paper elaborates the implementation of an experimental setup to study an adaptive algorithm that can help the user better tele-operate the robot. The formulation of the adaptive interface and associated learning algorithms are general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and associated results that were used to validate the adaptive interface on a differential drive robot with two different input devices: a joystick and a Myo gesture control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
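
    A toy sketch of the optimization step described above: a genetic algorithm searching for the parameters of a linear input-to-command mapping that minimizes tracking error on recorded trials. The fitness definition, GA settings, and synthetic data are assumptions, not the paper's implementation.

```python
# Hedged sketch: GA search over a 2x3 linear teleoperation mapping.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                       # recorded user inputs (e.g., joystick axes)
W_true = np.array([[1.0, 0.0, 0.5], [0.0, -1.0, 0.2]])
Y = X @ W_true.T                                    # commands that would have reached the goal

def fitness(w: np.ndarray) -> float:
    W = w.reshape(2, 3)
    return -np.mean((X @ W.T - Y) ** 2)             # negative tracking error

pop = rng.normal(size=(50, 6))                      # population of flattened mappings
for _ in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]         # selection: keep the 10 fittest
    a = parents[rng.integers(0, 10, size=40)]
    b = parents[rng.integers(0, 10, size=40)]
    children = 0.5 * (a + b)                        # crossover (blend)
    children += rng.normal(scale=0.1, size=children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])].reshape(2, 3)
print(np.round(best, 2))                            # ends up near W_true
```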

  15. Intelligent Adaptive Interface: A Design Tool for Enhancing Human-Machine System Performances

    DTIC Science & Technology

    2009-10-01

    and customizable. Thus, an intelligent interface should tailor its parameters to certain prescribed specifications or convert itself and adjust to...Computer Interaction 3(2): 87-122. [51] Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W., & Wielinga, W

  16. Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    NASA Technical Reports Server (NTRS)

    Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)

    1988-01-01

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.

  17. CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Komurlu, Caner; Blaha, Leslie M.

    We developed CHISSL, a human-machine interface that utilizes supervised machine learning in an unsupervised context to help the user group unlabeled instances by her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user's interactions, CHISSL trains a classification model guided by the user's grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
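
    A toy sketch of the interaction loop described above: the user corrects or confirms a few group assignments, and a simple model re-predicts groups for the remaining unlabeled instances. A nearest-centroid predictor and synthetic 2-D data stand in for CHISSL's actual model and features.

```python
# Hedged sketch: correct/confirm loop with a nearest-centroid group predictor.
import numpy as np

def predict_groups(X: np.ndarray, labeled: dict) -> np.ndarray:
    """labeled maps instance index -> group id; returns a predicted group per row."""
    groups = sorted(set(labeled.values()))
    centroids = np.array([X[[i for i, g in labeled.items() if g == k]].mean(axis=0)
                          for k in groups])
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.array(groups)[dists.argmin(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])  # two latent groups
truth = np.array([0] * 30 + [1] * 30)
labeled = {0: 0, 30: 1}                      # the user's first two corrections
for _ in range(5):                           # a few rounds of confirm/correct
    i = int(rng.integers(0, len(X)))         # instance the user inspects next
    labeled[i] = int(truth[i])               # user confirms or corrects its group
print("agreement:", (predict_groups(X, labeled) == truth).mean())
```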

  18. Human factors issues in telerobotic systems for Space Station Freedom servicing

    NASA Technical Reports Server (NTRS)

    Malone, Thomas B.; Permenter, Kathryn E.

    1990-01-01

    Requirements for Space Station Freedom servicing are described and the state-of-the-art for telerobotic system on-orbit servicing of spacecraft is defined. The projected requirements for the Space Station Flight Telerobotic Servicer (FTS) are identified. Finally, the human factors issues in telerobotic servicing are discussed. The human factors issues are basically three: the definition of the role of the human versus automation in system control; the identification of operator-device interface design requirements; and the requirements for development of an operator-machine interface simulation capability.

  19. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    NASA Astrophysics Data System (ADS)

    Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interactions between human operators and UAVs (Unmanned Aerial Vehicle) are demanded by the military industry to achieve a better balance of the UAV control and the burden of the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique to efficiently work with depth video sequences. Another major consideration is the special hand sign language used for the UAV control. A tradeoff between the use of natural hand signs and the minimization of the inter-sign interference has been established. Promising results have been achieved in a depth-based database of hand gestures especially developed for the validation of the proposed system.
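
    A compact sketch of the recognition pipeline: an LBP-style descriptor computed on depth patches, fed to an SVM classifier. The paper's descriptor is spatio-temporal; this static, per-frame version with synthetic "gestures" is only meant to show the shape of the approach.

```python
# Hedged sketch: LBP histogram on depth patches + SVM classification.
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(depth: np.ndarray) -> np.ndarray:
    """8-neighbour LBP codes over a depth patch, as a normalized 256-bin histogram."""
    c = depth[1:-1, 1:-1]
    neighbours = [depth[:-2, :-2], depth[:-2, 1:-1], depth[:-2, 2:],
                  depth[1:-1, 2:], depth[2:, 2:], depth[2:, 1:-1],
                  depth[2:, :-2], depth[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.int64)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(np.int64) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
ramp = np.add.outer(np.linspace(0, 100, 32), np.linspace(0, 100, 32))
# Two synthetic "gesture" classes: a smooth ramp-like patch vs. an irregular one.
frames = [ramp + rng.normal(0, 1, (32, 32)) for _ in range(40)] + \
         [rng.normal(50, 30, (32, 32)) for _ in range(40)]
y = np.array([0] * 40 + [1] * 40)
X = np.array([lbp_histogram(f) for f in frames])
clf = SVC(kernel="rbf").fit(X[::2], y[::2])          # train on every other sample
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```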

  20. Human factors in technology replacement: a case study in interface design for a public transport monitoring system.

    PubMed

    Harper, J G; Fuller, R; Sweeney, D; Waldmann, T

    1998-04-01

    This paper describes ergonomic issues raised during a project to provide a replacement real-time bus route control system to a large public transport company. Task and system analyses highlighted several deficiencies in the original system architecture, the human-machine interfaces and the general approach to system management. The eventual live prototype replaced the existing original system for a trial evaluation period of several weeks. During this period a number of studies was conducted with the system users in order to measure any improvements the new system, with its ergonomic features, produced over the old. Importantly, the results confirmed that (a) general responsiveness and service quality were improved, and (b) users were more comfortable with the new design. We conclude with a number of caveats which we believe will be useful to any group addressing technology impact in a large organisation.

  1. Human Machine Interfaces for Teleoperators and Virtual Environments: Conference Held in Santa Barbara, California on 4-9 March 1990.

    DTIC Science & Technology

    1990-03-01

    decided to have three kinds of sessions: invited-paper sessions, panel discussions, and poster sessions. The invited papers were divided into papers...soon followed. Applications in medicine, involving exploration and operation within the human body, are now receiving increased attention. Early... attention toward issues that may be important for the design of auditory interfaces. The importance of appropriate auditory inputs to observers with normal

  2. The remapping of space in motor learning and human-machine interfaces

    PubMed Central

    Mussa-Ivaldi, F.A.; Danziger, Z.

    2009-01-01

    Studies of motor adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. One of the most fundamental elements of our environment is space itself. This article focuses on the notion of Euclidean space as it applies to common sensory motor experiences. Starting from the assumption that we interact with the world through a system of neural signals, we observe that these signals are not inherently endowed with metric properties of the ordinary Euclidean space. The ability of the nervous system to represent these properties depends on adaptive mechanisms that reconstruct the Euclidean metric from signals that are not Euclidean. Gaining access to these mechanisms will reveal the process by which the nervous system handles novel sophisticated coordinate transformation tasks, thus highlighting possible avenues to create functional human-machine interfaces that can make that task much easier. A set of experiments is presented that demonstrate the ability of the sensory-motor system to reorganize coordination in novel geometrical environments. In these environments multiple degrees of freedom of body motions are used to control the coordinates of a point in a two-dimensional Euclidean space. We discuss how practice leads to the acquisition of the metric properties of the controlled space. Methods of machine learning based on the reduction of reaching errors are tested as a means to facilitate learning by adaptively changing the map from body motions to the controlled device. We discuss the relevance of the results to the development of adaptive human machine interfaces and optimal control. PMID:19665553
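
    A small sketch of the last idea above, assuming a linear map from body signals to the 2-D cursor that is nudged after every reach to reduce the observed error (an LMS-style update); the dimensions, learning rate, and simulated user are illustrative assumptions.

```python
# Hedged sketch: error-driven adaptation of a body-to-cursor map.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 6))            # interface map: 6 body signals -> 2D cursor
A_user = rng.normal(size=(2, 6))       # how the simulated user moves for a given target
P_user = np.linalg.pinv(A_user)        # user's fixed strategy: target -> body motion

errors = []
for _ in range(500):
    target = rng.uniform(-1, 1, size=2)
    body = P_user @ target             # body motion produced for this target
    cursor = A @ body                  # where the interface puts the cursor
    err = target - cursor
    errors.append(np.linalg.norm(err))
    A += 0.2 * np.outer(err, body)     # machine-side adaptation step (LMS)

print(f"mean reach error: first 50 trials {np.mean(errors[:50]):.2f}, "
      f"last 50 trials {np.mean(errors[-50:]):.3f}")
```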

  3. A vibro-haptic human-machine interface for structural health monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mascarenas, David; Plont, Crystal; Brown, Christina

    The structural health monitoring (SHM) community’s goal has been to endow physical systems with a nervous system not unlike those commonly found in living organisms. Typically the SHM community has attempted to do this by instrumenting structures with a variety of sensors, and then applying various signal processing and classification procedures to the data in order to detect the presence of damage, the location of damage, the severity of damage, and to estimate the remaining useful life of the structure. This procedure has had some success, but we are still a long way from achieving the performance of nervous systems found in biology. This is primarily because contemporary classification algorithms do not have the performance required. In many cases expert judgment is superior to automated classification. This work introduces a new paradigm. We propose interfacing the human nervous system to the distributed sensor network located on the structure and developing new techniques to enable human-machine cooperation. Results from the field of sensory substitution suggest this should be possible. This study investigates a vibro-haptic human-machine interface for SHM. The investigation was performed using a surrogate three-story structure. The structure features three nonlinearity-inducing bumpers to simulate damage. Accelerometers are placed on each floor to measure the response of the structure to a harmonic base excitation. The accelerometer measurements are preprocessed, and the preprocessed data is then encoded as a vibro-tactile stimulus. Human subjects were then subjected to the vibro-tactile stimulus and asked to characterize the damage in the structure.

  4. A vibro-haptic human-machine interface for structural health monitoring

    DOE PAGES

    Mascarenas, David; Plont, Crystal; Brown, Christina; ...

    2014-11-01

    The structural health monitoring (SHM) community’s goal has been to endow physical systems with a nervous system not unlike those commonly found in living organisms. Typically the SHM community has attempted to do this by instrumenting structures with a variety of sensors, and then applying various signal processing and classification procedures to the data in order to detect the presence of damage, the location of damage, the severity of damage, and to estimate the remaining useful life of the structure. This procedure has had some success, but we are still a long way from achieving the performance of nervous systems found in biology. This is primarily because contemporary classification algorithms do not have the performance required. In many cases expert judgment is superior to automated classification. This work introduces a new paradigm. We propose interfacing the human nervous system to the distributed sensor network located on the structure and developing new techniques to enable human-machine cooperation. Results from the field of sensory substitution suggest this should be possible. This study investigates a vibro-haptic human-machine interface for SHM. The investigation was performed using a surrogate three-story structure. The structure features three nonlinearity-inducing bumpers to simulate damage. Accelerometers are placed on each floor to measure the response of the structure to a harmonic base excitation. The accelerometer measurements are preprocessed, and the preprocessed data is then encoded as a vibro-tactile stimulus. Human subjects were then subjected to the vibro-tactile stimulus and asked to characterize the damage in the structure.

  5. The Human Behaviour-Change Project: harnessing the power of artificial intelligence and machine learning for evidence synthesis and interpretation.

    PubMed

    Michie, Susan; Thomas, James; Johnston, Marie; Aonghusa, Pol Mac; Shawe-Taylor, John; Kelly, Michael P; Deleris, Léa A; Finnerty, Ailbhe N; Marques, Marta M; Norris, Emma; O'Mara-Eves, Alison; West, Robert

    2017-10-18

    Behaviour change is key to addressing both the challenges facing human health and wellbeing and to promoting the uptake of research findings in health policy and practice. We need to make better use of the vast amount of accumulating evidence from behaviour change intervention (BCI) evaluations and promote the uptake of that evidence into a wide range of contexts. The scale and complexity of the task of synthesising and interpreting this evidence, and increasing evidence timeliness and accessibility, will require increased computer support. The Human Behaviour-Change Project (HBCP) will use Artificial Intelligence and Machine Learning to (i) develop and evaluate a 'Knowledge System' that automatically extracts, synthesises and interprets findings from BCI evaluation reports to generate new insights about behaviour change and improve prediction of intervention effectiveness and (ii) allow users, such as practitioners, policy makers and researchers, to easily and efficiently query the system to get answers to variants of the question 'What works, compared with what, how well, with what exposure, with what behaviours (for how long), for whom, in what settings and why?'. The HBCP will: a) develop an ontology of BCI evaluations and their reports linking effect sizes for given target behaviours with intervention content and delivery and mechanisms of action, as moderated by exposure, populations and settings; b) develop and train an automated feature extraction system to annotate BCI evaluation reports using this ontology; c) develop and train machine learning and reasoning algorithms to use the annotated BCI evaluation reports to predict effect sizes for particular combinations of behaviours, interventions, populations and settings; d) build user and machine interfaces for interrogating and updating the knowledge base; and e) evaluate all the above in terms of performance and utility. The HBCP aims to revolutionise our ability to synthesise, interpret and deliver evidence on behaviour change interventions that is up-to-date and tailored to user need and context. This will enhance the usefulness, and support the implementation of, that evidence.

  6. Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Queen, Steven M.; Sanner, Kurt Gregory

    2011-01-01

    One of the pilot-machine interfaces (the forward viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.
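
    A quick geometric check of the display-size argument above: for the camera image to feel like an out-the-window view, the display should subtend roughly the same visual angle as the camera's field of view. The viewing distance and panel sizes below are example values, not figures from the study.

```python
# Hedged sketch: visual angle subtended by a 16:9 display at a given distance.
import math

def subtended_angle_deg(width_m: float, viewing_distance_m: float) -> float:
    return math.degrees(2 * math.atan(width_m / (2 * viewing_distance_m)))

viewing_distance = 0.6                                  # metres from eyes to screen (assumed)
for diag_in in (20, 55):
    width = diag_in * 0.0254 * 16 / math.hypot(16, 9)   # panel width for a 16:9 aspect
    angle = subtended_angle_deg(width, viewing_distance)
    print(f'{diag_in}" display subtends about {angle:.0f} degrees')
# A wide-angle camera (say 90 degrees) shown on the small display therefore
# appears compressed, while the larger display can match it roughly 1:1.
```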

  7. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    PubMed

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

    This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them together. In this way, the eye movements and ERPs complement each other. Therefore, it can provide a better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects can complete tasks effectively using the proposed interface, and the best completion times are relatively short, very close to those achieved by hand operation.
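
    A minimal sketch of the EOG-mode idea: threshold-based detection of eye events on a vertical EOG channel. The threshold, refractory gap, and synthetic signal are illustrative assumptions; the paper's algorithm distinguishes four movement types and is more involved.

```python
# Hedged sketch: simple threshold detection of blinks in a vertical EOG trace.
import numpy as np

def detect_blinks(veog: np.ndarray, fs: float, thresh: float = 150.0,
                  min_gap_s: float = 0.3) -> list:
    """Return sample indices where the signal first exceeds `thresh` (in microvolts)."""
    above = veog > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    events, last = [], -np.inf
    for i in onsets:
        if (i - last) / fs >= min_gap_s:      # debounce successive crossings
            events.append(int(i))
            last = i
    return events

fs = 250.0
t = np.arange(0, 5, 1 / fs)
veog = 20 * np.random.default_rng(0).normal(size=t.size)      # baseline noise
for blink_t in (1.0, 2.5, 4.0):                               # three synthetic blinks
    veog += 300 * np.exp(-((t - blink_t) ** 2) / (2 * 0.05 ** 2))
print("blinks detected at (s):", [round(i / fs, 2) for i in detect_blinks(veog, fs)])
```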

  8. Brain-machine interfacing control of whole-body humanoid motion

    PubMed Central

    Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun

    2014-01-01

    We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to human-like mechanical avatar. Our solution is based on the adequate reduction of the controllable dimensionality of a high-DOF humanoid motion in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complement subspace part of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with motor imagery task. PMID:25140134

  9. Digital Systems Validation Handbook. Volume 2. Chapter 19. Pilot - Vehicle Interface

    DTIC Science & Technology

    1993-11-01

    checklists, and other status messages. Voice interactive systems are defined as "the interface between a cooperative human and a machine, which involves the...Pilot-Vehicle Interface 19-85 5.6.1 Crew Interaction and the Cockpit 19-85 5.6.2 Crew Resource Management and Safety 19-87 5.6.3 Pilot and Crew Training...systems was a "stand-alone" component performing its intended function. Systems and their cockpit interfaces were added as technological advances were

  10. Stretchable, Transparent, Ultrasensitive, and Patchable Strain Sensor for Human-Machine Interfaces Comprising a Nanohybrid of Carbon Nanotubes and Conductive Elastomers.

    PubMed

    Roh, Eun; Hwang, Byeong-Ung; Kim, Doil; Kim, Bo-Yeong; Lee, Nae-Eung

    2015-06-23

    Interactivity between humans and smart systems, including wearable, body-attachable, or implantable platforms, can be enhanced by realization of multifunctional human-machine interfaces, where a variety of sensors collect information about the surrounding environment, intentions, or physiological conditions of the human to which they are attached. Here, we describe a stretchable, transparent, ultrasensitive, and patchable strain sensor that is made of a novel sandwich-like stacked piezoresistive nanohybrid film of single-wall carbon nanotubes (SWCNTs) and a conductive elastomeric composite of polyurethane (PU)-poly(3,4-ethylenedioxythiophene):polystyrenesulfonate (PEDOT:PSS). This sensor, which can detect small strains on human skin, was created using environmentally benign water-based solution processing. We attributed the tunability of strain sensitivity (i.e., gauge factor), stability, and optical transparency to enhanced formation of percolating networks between conductive SWCNTs and PEDOT phases at interfaces in the stacked PU-PEDOT:PSS/SWCNT/PU-PEDOT:PSS structure. The mechanical stability, high stretchability of up to 100%, optical transparency of 62%, and gauge factor of 62 suggested that when attached to the skin of the face, this sensor would be able to detect small strains induced by emotional expressions such as laughing and crying, as well as eye movement, and we confirmed this experimentally.
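
    The quoted gauge factor relates resistance change to strain, GF = (ΔR/R0)/ε, so a gauge factor of 62 means a strain of only 0.1% already produces a 6.2% resistance change. A tiny illustration (the resistance values are made up):

```python
# Hedged sketch: converting a resistance change into strain via the gauge factor.
def strain_from_resistance(r: float, r0: float, gauge_factor: float = 62.0) -> float:
    return (r - r0) / r0 / gauge_factor

r0 = 10_000.0                      # unstrained resistance in ohms (assumed)
r = 10_620.0                       # resistance measured under strain (assumed)
print(f"strain = {strain_from_resistance(r, r0):.2%}")   # -> 0.10%
```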

  11. Simulation of the «COSMONAUT-ROBOT» System Interaction on the Lunar Surface Based on Methods of Machine Vision and Computer Graphics

    NASA Astrophysics Data System (ADS)

    Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.

    2017-05-01

    Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is a proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. multimodal contactless interface based on recognition of gestures and cosmonaut's poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools for tracking cosmonaut's position and movements, and on the basis of these data builds its itinerary. The interaction in the system "cosmonaut-robot" on the lunar surface is significantly different from that on the Earth surface. For example, a man, dressed in a space suit, has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human being less accurately performs movements and often makes mistakes. All this leads to new requirements for the convenient use of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication it is necessary to provide options for duplicating commands at the task stages and gesture recognition. New tools and techniques for space missions must be examined at the first stage of works in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes the methods of detection and tracking of movements and gesture recognition of the cosmonaut during EVA, which can be used for the design of human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. Simulation involves environment visualization and modeling of the use of the "vision" of the robot to track a moving cosmonaut dressed in a spacesuit.

  12. FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN

    NASA Astrophysics Data System (ADS)

    Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando

    2014-06-01

    The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and JQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries and pushing the development efforts to a higher abstraction layer based on a scripting language allow for significant reduction in maintenance of the code in multi-platform environments compared to those currently used in C++ visualization plugins. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.

  13. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and to recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of the corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed novel leading and detecting algorithm along with a rapid scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. Position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms are effective in enabling the robot to detect the center of the corridor and the position of its follower correctly.
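
    A simplified sketch of the corridor-centre idea: from a single LRF scan, estimate the robot's lateral offset from the centreline using the nearest returns to its left and right. The geometry and angular windows are assumptions; the paper's detection algorithm is more elaborate.

```python
# Hedged sketch: lateral offset from the corridor centreline from one LRF scan.
import numpy as np

def corridor_offset(angles_deg: np.ndarray, ranges_m: np.ndarray) -> float:
    """Positive offset means the robot sits right of the corridor centre."""
    left = ranges_m[np.abs(angles_deg - 90) < 10].min()    # distance to the left wall
    right = ranges_m[np.abs(angles_deg + 90) < 10].min()   # distance to the right wall
    return (right - left) / 2.0

angles = np.arange(-120.0, 121.0, 1.0)        # a typical LRF field of view
# Synthetic scan: a 2 m wide corridor with the robot 0.3 m right of centre.
y_left, y_right = 0.7, -1.3
ranges = np.full_like(angles, 10.0)           # far end of the corridor
left_mask, right_mask = angles > 0, angles < 0
ranges[left_mask] = y_left / np.sin(np.radians(angles[left_mask]))
ranges[right_mask] = y_right / np.sin(np.radians(angles[right_mask]))
print(f"offset from centre: {corridor_offset(angles, ranges):+.2f} m")   # ~ +0.30
```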

  14. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system, the purpose is to train, inform, alter, or study the human operator to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they had little impact outside aviation presumably because the application was so specialized and so expensive.

  15. Software Engineering for User Interfaces. Technical Report.

    ERIC Educational Resources Information Center

    Draper, Stephen W.; Norman, Donald A.

    The discipline of software engineering can be extended in a natural way to deal with the issues raised by a systematic approach to the design of human-machine interfaces. The user should be treated as part of the system being designed and projects should be organized to take into account the current lack of a priori knowledge of user interface…

  16. State transition storyboards: A tool for designing the Goldstone solar system radar data acquisition system user interface software

    NASA Technical Reports Server (NTRS)

    Howard, S. D.

    1987-01-01

    Effective user interface design in software systems is a complex task that takes place without adequate modeling tools. By combining state transition diagrams and the storyboard technique of filmmakers, State Transition Storyboards were developed to provide a detailed modeling technique for the Goldstone Solar System Radar Data Acquisition System human-machine interface. Illustrations are included with a description of the modeling technique.

  17. Man-machine interface issues in space telerobotics: A JPL research and development program

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1987-01-01

    Technology issues related to the use of robots as man-extension or telerobot systems in space are discussed and exemplified. General considerations are presented on control and information problems in space teleoperation and on the characteristics of Earth orbital teleoperation. The JPL R and D work in the area of man-machine interface devices and techniques for sensing and computer-based control is briefly summarized. The thrust of this R and D effort is to render space teleoperation efficient and safe through the use of devices and techniques which will permit integrated and task-level (intelligent) two-way control communication between human operator and telerobot machine in Earth orbit. Specific control and information display devices and techniques are discussed and exemplified with development results obtained at JPL in recent years.

  18. Hands-free human-machine interaction with voice

    NASA Astrophysics Data System (ADS)

    Juang, B. H.

    2004-05-01

    Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation to save substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, to enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call ``hands-free'' human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.

  19. Considerations for human-machine interfaces in tele-operations

    NASA Technical Reports Server (NTRS)

    Newport, Curt

    1991-01-01

    Numerous factors impact on the efficiency of tele-operative manipulative work. Generally, these are related to the physical environment of the tele-operator and how he interfaces with robotic control consoles. The capabilities of the operator can be influenced by considerations such as temperature, eye strain, body fatigue, and boredom created by repetitive work tasks. In addition, the successful combination of man and machine will, in part, be determined by the configuration of the visual and physical interfaces available to the teleoperator. The design and operation of system components such as full-scale and mini-master manipulator controllers, servo joysticks, and video monitors will have a direct impact on operational efficiency. As a result, the local environment and the interaction of the operator with the robotic control console have a substantial effect on mission productivity.

  20. Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.

    PubMed

    Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen

    2012-01-01

    An extremely reliable remote control system for a subsea blowout preventer stack is developed based on an off-the-shelf triple modular redundancy system. To meet a high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy and network redundancy are used to design the system hardware architecture. The control logic, human-machine interface graphical design and redundant databases are developed using off-the-shelf software. A series of experiments was performed in the laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For faults of programmable logic controllers, discrete input groups and analog input groups, the control system could give correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
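
    A tiny sketch of the voting idea behind a triple modular redundancy (TMR) controller: three redundant channel readings are voted, and a channel that disagrees with the voted value raises an alarm for the human-machine interface. The voting rule and tolerance are generic assumptions, not the paper's implementation.

```python
# Hedged sketch: 2-out-of-3 style voting over redundant analog channel readings.
from statistics import median

def tmr_vote(a: float, b: float, c: float, tol: float = 0.5):
    """Return the voted value and the list of channels disagreeing with it."""
    voted = median([a, b, c])
    faults = [name for name, v in (("A", a), ("B", b), ("C", c))
              if abs(v - voted) > tol]
    return voted, faults

print(tmr_vote(100.2, 100.1, 100.3))   # healthy: (100.2, [])
print(tmr_vote(100.2, 42.0, 100.3))    # channel B faulted -> alarm on the HMI
```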

  1. Quadcopter control using a BCI

    NASA Astrophysics Data System (ADS)

    Rosca, S.; Leba, M.; Ionica, A.; Gamulescu, O.

    2018-01-01

    The paper presents how two elements that are ubiquitous nowadays can be interconnected. On one hand, drones are increasingly present and integrated into more and more fields of activity beyond the military applications they originate from, moving towards entertainment, real estate, delivery and so on. On the other hand, unconventional man-machine interfaces remain rich topics to explore now and in the future. Of these, we chose the brain-computer interface (BCI), which allows human-machine interaction without requiring any moving elements. The research consists of mathematical modeling and numerical simulation of a drone and a BCI. An application using a Parrot mini-drone and an Emotiv Insight BCI is then presented.

  2. Techno-Human Mesh: The Growing Power of Information Technologies.

    ERIC Educational Resources Information Center

    West, Cynthia K.

    This book examines the intersection of information technologies, power, people, and bodies. It explores how information technologies are on a path of creating efficiency, productivity, profitability, surveillance, and control, and looks at the ways in which human-machine interface technologies, such as wearable computers, biometric technologies,…

  3. A preliminary study of MR sickness evaluation using visual motion aftereffect for advanced driver assistance systems.

    PubMed

    Nakajima, Sawako; Ino, Shuichi; Ifukube, Tohru

    2007-01-01

    Mixed Reality (MR) technologies have recently been explored in many areas of Human-Machine Interface (HMI) such as medicine, manufacturing, entertainment and education. However, MR sickness, a kind of motion sickness, is caused by sensory conflicts between the real world and the virtual world. The purpose of this paper is to establish a new evaluation method for motion and MR sickness. This paper investigates a relationship between the whole-body vibration related to MR technologies and the motion aftereffect (MAE) phenomenon in the human visual system. This MR environment is modeled after advanced driver assistance systems in near-future vehicles. The seated subjects in the MR simulator were shaken in the pitch direction ranging from 0.1 to 2.0 Hz. Results show that MAE is useful for evaluation of MR sickness incidence. In addition, a method to reduce MR sickness by auditory stimulation is proposed.

  4. ODISEES: A New Paradigm in Data Access

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Little, M. M.; Kusterer, J.

    2013-12-01

    As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queryable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool together form a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
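
    The ES Ontology vocabulary itself is not reproduced in this record; the sketch below only illustrates, with made-up URIs and predicate names, how parameter-to-variable-to-dataset mappings of the kind described above can be stored as RDF triples and queried with SPARQL using the rdflib library.

        from rdflib import Graph, Namespace

        EX = Namespace("http://example.org/es-ontology#")   # placeholder namespace
        g = Graph()

        # Illustrative triples: a parameter is measured by a variable, which occurs in a dataset.
        g.add((EX.AerosolOpticalDepth, EX.measuredBy, EX.AOD_550nm))
        g.add((EX.AOD_550nm, EX.occursIn, EX.Dataset_MISR_L2))
        g.add((EX.Dataset_MISR_L2, EX.producedBy, EX.Instrument_MISR))

        # Find every dataset containing a variable that measures the parameter of interest.
        query = """
        SELECT ?dataset WHERE {
            <http://example.org/es-ontology#AerosolOpticalDepth>
                <http://example.org/es-ontology#measuredBy> ?var .
            ?var <http://example.org/es-ontology#occursIn> ?dataset .
        }
        """
        for row in g.query(query):
            print(row.dataset)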

  5. On the applicability of brain reading for predictive human-machine interfaces in robotics.

    PubMed

    Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred

    2013-01-01

    The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires similar behavior as performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
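
    The exact features and classifier are described in the paper itself; as a generic illustration of supervised single-trial EEG classification (e.g., P300 target vs. non-target epochs), a scikit-learn sketch with placeholder data could look as follows.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # epochs: (n_trials, n_channels, n_samples) band-pass filtered EEG segments;
        # labels: 1 = target (P300 expected), 0 = non-target.  Placeholder data here.
        rng = np.random.default_rng(0)
        epochs = rng.standard_normal((200, 32, 128))
        labels = rng.integers(0, 2, 200)

        X = epochs.reshape(len(epochs), -1)       # flatten each trial into one feature vector
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # shrinkage suits few trials
        scores = cross_val_score(clf, X, labels, cv=5)
        print("mean single-trial accuracy:", scores.mean())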

  6. On the Applicability of Brain Reading for Predictive Human-Machine Interfaces in Robotics

    PubMed Central

    Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred

    2013-01-01

    The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires similar behavior as performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors. PMID:24358125

  7. A novel asynchronous access method with binary interfaces

    PubMed Central

    2008-01-01

    Background Traditional synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands to be controlled. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. Then, a recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results When compared to a random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion This novel access method offers a variety of advantages over traditional synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
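
    The paper gives the actual recursive algorithm; the fragment below is only a loose, simplified illustration of the underlying idea, assuming the device repeatedly executes its current best guess and a switch activation signals that the last executed command was erroneous (the miss rate is a made-up parameter).

        def update_belief(belief, activated, last_cmd, miss_rate=0.05):
            """Very simplified recursive estimate of user intention over a command set.
            belief: dict command -> probability; activated: True if the binary switch
            was pressed, i.e. the last executed command was judged erroneous."""
            new = {}
            for cmd, p in belief.items():
                if activated:
                    lik = miss_rate if cmd == last_cmd else 1.0   # press: last command likely wrong
                else:
                    lik = 1.0 if cmd == last_cmd else miss_rate   # silence: last command likely right
                new[cmd] = p * lik
            total = sum(new.values())
            return {cmd: p / total for cmd, p in new.items()}

        belief = {c: 0.25 for c in ["up", "down", "left", "right"]}
        belief = update_belief(belief, activated=True, last_cmd="up")   # user corrected "up"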

  8. Interface design in the process industries

    NASA Technical Reports Server (NTRS)

    Beaverstock, M. C.; Stassen, H. G.; Williamson, R. A.

    1977-01-01

    Every operator runs his plant in accord with his own mental model of the process. In this sense, one characteristic of an ideal man-machine interface is that it be in harmony with that model. With this theme in mind, the paper first reviews the functions of the process operator and compares them with human operators involved in control situations previously studied outside the industrial environment (pilots, air traffic controllers, helmsmen, etc.). A brief history of the operator interface in the process industry and the traditional methodology employed in its design is then presented. Finally, a much more fundamental approach utilizing a model definition of the human operator's behavior is presented.

  9. Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.

    PubMed

    Durandau, Guillaume; Farina, Dario; Sartori, Massimo

    2018-03-01

    Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.
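
    The calibrated EMG-driven model itself is far richer than can be shown here; as a deliberately simplified sketch of the first stages of such a pipeline, the snippet below rectifies and low-pass filters raw EMG into an activation-like envelope and maps activations to a joint moment with assumed maximum forces and moment arms (all values are placeholders).

        import numpy as np
        from scipy.signal import butter, filtfilt

        def emg_to_activation(emg, fs, cutoff_hz=4.0):
            """Rectify and low-pass filter raw EMG into a normalized linear envelope."""
            rectified = np.abs(emg - np.mean(emg))
            b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
            envelope = filtfilt(b, a, rectified)
            return envelope / (np.max(envelope) + 1e-12)

        def joint_moment(activations, max_forces, moment_arms):
            """Illustrative linear model: moment = sum of activation * F_max * moment arm."""
            return np.sum(activations * max_forces * moment_arms, axis=-1)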

  10. Analysis of a display and control system man-machine interface concept. Volume 1: Final technical report

    NASA Technical Reports Server (NTRS)

    Karl, D. R.

    1972-01-01

    An evaluation was made of the feasibility of utilizing a simplified man machine interface concept to manage and control a complex space system involving multiple redundant computers that control multiple redundant subsystems. The concept involves the use of a CRT for display and a simple keyboard for control, with a tree-type control logic for accessing and controlling mission, systems, and subsystem elements. The concept was evaluated in terms of the Phase B space shuttle orbiter, to utilize the wide scope of data management and subsystem control inherent in the central data management subsystem provided by the Phase B design philosophy. Results of these investigations are reported in four volumes.

  11. Advanced Aircraft Interfaces: The Machine Side of the Man-Machine Interface (Les Interfaces sur les Avions de Pointe: L’Aspect Machine de l’Interface Homme-Machine)

    DTIC Science & Technology

    1992-10-01

    Contents include: Advanced Cockpit - Mission and Image Management, by J. Struck; Aircrew Acceptance of Automation in the Cockpit, by M. Hicks; Design Concepts and Tools: A Systems Approach to the Advanced Aircraft Man-Machine Interface, by F. Armogida; Management of Avionics Data in the Cockpit.

  12. Structure design of lower limb exoskeletons for gait training

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Ziqiang; Tao, Chunjing; Ji, Run

    2015-09-01

    Due to the close physical interaction between human and machine in the process of gait training, lower limb exoskeletons should be safe, comfortable and able to smoothly transfer desired driving force/moments to the patients. Correlatively, in kinematics the exoskeletons are required to be compatible with human lower limbs and thereby to avoid uncontrollable interactional loads at the human-machine interfaces. Such a requirement makes the structure design of exoskeletons very difficult because the human-machine closed chains are complicated. In addition, both the axis misalignments and the kinematic character difference between the exoskeleton and human joints should be taken into account. By analyzing the DOF (degree of freedom) of the whole human-machine closed chain, the human-machine kinematic incompatibility of lower limb exoskeletons is studied. An effective method for the structure design of lower limb exoskeletons, which are kinematically compatible with the human lower limb, is proposed. Applying this method, the structure synthesis of the lower limb exoskeletons containing only one-DOF revolute and prismatic joints is investigated; the feasible basic structures of exoskeletons are developed and classified into three different categories. With consideration of the quasi-anthropopathic features, structural simplicity and wearable comfort of lower limb exoskeletons, a joint replacement and structure comparison based approach to select the ideal structures of lower limb exoskeletons is proposed, by which three optimal exoskeleton structures are obtained. This paper indicates that the human-machine closed chain formed by the exoskeleton and human lower limb should be an even-constrained kinematic system in order to avoid uncontrollable human-machine interactional loads. The presented method for the structure design of lower limb exoskeletons is universal and simple, and hence can be applied to other kinds of wearable exoskeletons.
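
    The DOF bookkeeping referred to above can be made concrete with the standard Grübler-Kutzbach mobility criterion; the snippet below evaluates it for a spatial human-machine closed chain with given link and joint counts (the example numbers are placeholders, not the paper's).

        def mobility_spatial(n_links, joint_dofs):
            """Grübler-Kutzbach criterion for a spatial kinematic chain:
            M = 6*(n - 1) - sum(6 - f_i), where f_i is the DOF of joint i."""
            return 6 * (n_links - 1) - sum(6 - f for f in joint_dofs)

        # Example: a closed chain of 5 links with joints of 1, 1, 3, 3 and 1 DOF.
        print(mobility_spatial(5, [1, 1, 3, 3, 1]))   # -> 3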

  13. Integration Telegram Bot on E-Complaint Applications in College

    NASA Astrophysics Data System (ADS)

    Rosid, M. A.; Rachmadany, A.; Multazam, M. T.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The Internet of Things (IoT) has influenced human life, with IoT internet connectivity extending from human-to-human to human-to-machine or machine-to-machine. Within this research field, technologies and concepts are being created that allow humans to communicate with machines for a specific purpose. This research aimed to integrate a Telegram message-sending service with an e-complaint application at a college. With this integration, users do not need to visit the URL of the e-complaint application; instead, they can simply submit a complaint via Telegram, and the complaint is then forwarded to the e-complaint application. The test results show that the e-complaint integration with the Telegram bot ran in accordance with the design. The Telegram bot makes it convenient for members of the academic community to submit a complaint, and it offers interaction through the familiar interface people already use every day on their smartphones. Thus, with this system, the work unit that is the subject of a complaint can make improvements immediately, since complaints are delivered rapidly.
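
    The paper's own implementation is not restated here; a minimal sketch of a bot loop that forwards incoming Telegram messages to an e-complaint endpoint over HTTP might look as follows (the bot token and the e-complaint URL are placeholders).

        import time
        import requests

        BOT_TOKEN = "<your-bot-token>"                                   # placeholder
        API = "https://api.telegram.org/bot" + BOT_TOKEN
        ECOMPLAINT_URL = "https://example.edu/e-complaint/api/submit"    # hypothetical endpoint

        def poll_and_forward(offset=None):
            while True:
                updates = requests.get(API + "/getUpdates",
                                       params={"timeout": 30, "offset": offset}).json()
                for upd in updates.get("result", []):
                    offset = upd["update_id"] + 1
                    msg = upd.get("message", {})
                    if "text" in msg:
                        # forward the complaint text to the e-complaint application
                        requests.post(ECOMPLAINT_URL,
                                      json={"user": msg["from"]["id"], "complaint": msg["text"]})
                        # acknowledge receipt to the user
                        requests.post(API + "/sendMessage",
                                      json={"chat_id": msg["chat"]["id"],
                                            "text": "Your complaint has been recorded."})
                time.sleep(1)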

  14. Mental State Assessment and Validation Using Personalized Physiological Biometrics

    PubMed Central

    Patel, Aashish N.; Howard, Michael D.; Roach, Shane M.; Jones, Aaron P.; Bryant, Natalie B.; Robinson, Charles S. H.; Clark, Vincent P.; Pilly, Praveen K.

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, training systems, and will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.
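
    As a small illustration of the validation scheme described above (k-fold cross-validation for discrete classification alongside regression testing for continuous prediction), a scikit-learn sketch with placeholder physiological features could look like this.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold, cross_val_score

        rng = np.random.default_rng(1)
        X = rng.standard_normal((120, 12))     # placeholder EEG/ECG features per time window
        y_class = rng.integers(0, 2, 120)      # e.g. low vs. high fatigue label
        y_cont = rng.random(120)               # e.g. continuous stress score

        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        acc = cross_val_score(RandomForestClassifier(random_state=0), X, y_class, cv=cv)
        r2 = cross_val_score(Ridge(alpha=1.0), X, y_cont, cv=cv, scoring="r2")
        print("classification accuracy:", acc.mean(), "regression R^2:", r2.mean())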

  15. Mental State Assessment and Validation Using Personalized Physiological Biometrics.

    PubMed

    Patel, Aashish N; Howard, Michael D; Roach, Shane M; Jones, Aaron P; Bryant, Natalie B; Robinson, Charles S H; Clark, Vincent P; Pilly, Praveen K

    2018-01-01

    Mental state monitoring is a critical component of current and future human-machine interfaces, including semi-autonomous driving and flying, air traffic control, decision aids, training systems, and will soon be integrated into ubiquitous products like cell phones and laptops. Current mental state assessment approaches supply quantitative measures, but their only frame of reference is generic population-level ranges. What is needed are physiological biometrics that are validated in the context of task performance of individuals. Using curated intake experiments, we are able to generate personalized models of three key biometrics as useful indicators of mental state; namely, mental fatigue, stress, and attention. We demonstrate improvements to existing approaches through the introduction of new features. Furthermore, addressing the current limitations in assessing the efficacy of biometrics for individual subjects, we propose and employ a multi-level validation scheme for the biometric models by means of k-fold cross-validation for discrete classification and regression testing for continuous prediction. The paper not only provides a unified pipeline for extracting a comprehensive mental state evaluation from a parsimonious set of sensors (only EEG and ECG), but also demonstrates the use of validation techniques in the absence of empirical data. Furthermore, as an example of the application of these models to novel situations, we evaluate the significance of correlations of personalized biometrics to the dynamic fluctuations of accuracy and reaction time on an unrelated threat detection task using a permutation test. Our results provide a path toward integrating biometrics into augmented human-machine interfaces in a judicious way that can help to maximize task performance.

  16. Charting the energy landscape of metal/organic interfaces via machine learning

    NASA Astrophysics Data System (ADS)

    Scherbela, Michael; Hörmann, Lukas; Jeindl, Andreas; Obersteiner, Veronika; Hofmann, Oliver T.

    2018-04-01

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. We demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.
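
    The paper's actual descriptors and learning model are not reproduced in this record; as a rough sketch of the workflow (fit a surrogate to ~100 DFT-evaluated candidates, then rank the remaining candidate structures), one could use Gaussian process regression as below, with placeholder descriptors and energies.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)
        descriptors = rng.random((2000, 8))      # placeholder descriptors of candidate structures
        dft_idx = rng.choice(2000, size=100, replace=False)   # the ~100 DFT-evaluated candidates
        dft_energies = rng.random(100)           # placeholder formation energies from DFT

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(descriptors[dft_idx], dft_energies)

        pred, std = gp.predict(descriptors, return_std=True)   # predicted energies and uncertainty
        most_stable = np.argsort(pred)[:10]      # lowest predicted formation energies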

  17. Charting the energy landscape of metal/organic interfaces via machine learning

    DOE PAGES

    Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas; ...

    2018-04-17

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.

  18. Charting the energy landscape of metal/organic interfaces via machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas

    The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.

  19. Virtual hospital--a computer-aided platform to evaluate the sense of direction.

    PubMed

    Jiang, Ching-Fen; Li, Yuan-Shyi

    2007-01-01

    This paper presents a computer-aided platform, named Virtual Hospital (VH), to evaluate the wayfinding ability that is found impaired in senile people with early dementia. The development of the VH takes advantage of virtual reality technology to make the evaluation of the sense of direction more convenient and accurate than the conventional way. A pilot study was carried out to test its feasibility in differentiating the sense of direction between different genders. The results, with significant differences in the response time (p<0.05) and the pointing error (p<0.01) between genders, suggest the potential of the VH for clinical uses. Further improvement of the human-machine interface is necessary to make it easy for geriatric people to use.

  20. Research in image management and access

    NASA Technical Reports Server (NTRS)

    Vondran, Raymond F.; Barron, Billy J.

    1993-01-01

    Presently, the problem of over-all library system design has been compounded by the accretion of both function and structure to a basic framework of requirements. While more device power has led to increased functionality, opportunities for reducing system complexity at the user interface level have not always been pursued with equal zeal. The purpose of this book is therefore to set forth and examine these opportunities, within the general framework of human factors research in man-machine interfaces. Human factors may be viewed as a series of trade-off decisions among four polarized objectives: machine resources and user specifications; functionality and user requirements. In the past, a limiting factor was the availability of systems. However, in the last two years, over one hundred libraries supported by many different software configurations have been added to the Internet. This document includes a statistical analysis of human responses to five Internet library systems by key features, development of the ideal online catalog system, and ideal online catalog systems for libraries and information centers.

  1. Triboelectrification based motion sensor for human-machine interfacing.

    PubMed

    Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin

    2014-05-28

    We present triboelectrification based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as directions and magnitude of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density up to 10.71 mA/m^2, and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio up to 1000, which enables the devices as sensors to accurately record and transform the motions of the human joints, such as elbow, knee, heel, and even fingers, and thus renders it a superior and unique invention in the field of HMI.
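
    The FFT step mentioned above can be illustrated with a short numpy sketch that extracts the dominant frequency of a joint-motion signal and converts it to an angular frequency; the sampling rate and signal are placeholders.

        import numpy as np

        fs = 1000.0                                  # placeholder sampling rate in Hz
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * 3.0 * t)         # placeholder sensor output (3 Hz joint motion)

        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        f_dominant = freqs[np.argmax(spectrum)]      # dominant motion frequency in Hz
        omega = 2 * np.pi * f_dominant               # corresponding angular frequency in rad/s
        print(f_dominant, omega)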

  2. Operant conditioning of a multiple degree-of-freedom brain-machine interface in a primate model of amputation.

    PubMed

    Balasubramanian, Karthikeyan; Southerland, Joshua; Vaidya, Mukta; Qian, Kai; Eleryan, Ahmed; Fagg, Andrew H; Sluzky, Marc; Oweiss, Karim; Hatsopoulos, Nicholas

    2013-01-01

    Operant conditioning with biofeedback has been shown to be an effective method to modify neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body such as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which an amputated monkey is trained to control a multiple degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees-of-freedom. We present a number of behavioral metrics by which to assess improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputation is arguably the most clinically-relevant model of human amputation that could have direct implications for developing a neural prosthesis to treat humans with missing upper limbs.

  3. Tensile and bending fatigue of the adhesive interface to dentin.

    PubMed

    Belli, Renan; Baratieri, Luiz Narciso; Braem, Marc; Petschelt, Anselm; Lohbauer, Ulrich

    2010-12-01

    The aim of this study was to evaluate the fatigue limits of the dentin-composite interfaces established with either an etch-and-rinse or a one-step self-etch adhesive system under tensile and bending configurations. Flat specimens (1.2 mm×5 mm×35 mm) were prepared using a plexiglass mold where dentin sections from human third molars were bonded to a resin composite, exhibiting the interface centrally located. Syntac Classic and G-Bond were used as adhesives and applied according to the manufacturer's instructions. The fluorochrome Rhodamine B was added to the adhesives to allow for fractographic evaluation. Tensile strength was measured in a universal testing machine and the bending strength (n=15) in a Flex machine (Flex, University of Antwerp, Belgium), respectively. Tensile (TFL) and bending fatigue limits (BFL) (n=25) were determined under wet conditions for 10^4 cycles following a staircase approach. Interface morphology and fracture mechanisms were observed using light, confocal laser scanning and scanning electron microscopy. Statistical analysis was performed using three-way ANOVA (mod LSD test, p<0.05). Tensile and bending characteristic strengths at 63.2% failure probability for Syntac were 23.8 MPa and 71.5 MPa, and 24.7 MPa and 72.3 MPa for G-Bond, respectively. Regarding the applied methods, no significant differences were detected between adhesives. However, fatigue limits for G-Bond (TFL=5.9 MPa; BFL=36.2 MPa) were significantly reduced when compared to Syntac (TFL=12.6 MPa; BFL=49.7 MPa). Fracture modes of Syntac were generally of adhesive nature, between the adhesive resin and dentin, while G-Bond showed fracture planes involving the adhesive-dentin interface and the adhesive resin. Cyclic loading under tensile and bending configurations led to a significant strength degradation, with a more pronounced fatigue limit decrease for G-Bond. The greater decrease in fracture strength was observed in the tensile configuration. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
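
    The staircase (up-and-down) procedure referred to above can be sketched generically as follows: each specimen is tested one load step above or below the previous level depending on whether the previous specimen survived, and the fatigue limit is estimated from the applied levels. This is a generic illustration, not the authors' exact protocol.

        def staircase_fatigue_limit(run_test, start_stress, step, n_specimens):
            """Up-and-down fatigue-limit estimation.
            run_test(stress) -> True if the specimen survives the fixed number of cycles."""
            levels, stress = [], start_stress
            for _ in range(n_specimens):
                levels.append(stress)
                survived = run_test(stress)
                stress = stress + step if survived else stress - step
            # a simple common estimate: the mean of the applied stress levels
            return sum(levels) / len(levels)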

  4. Micro-patterned graphene-based sensing skins for human physiological monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Long; Loh, Kenneth J.; Chiang, Wei-Hung; Manna, Kausik

    2018-03-01

    Ultrathin, flexible, conformal, and skin-like electronic transducers are emerging as promising candidates for noninvasive and nonintrusive human health monitoring. In this work, a wearable sensing membrane is developed by patterning a graphene-based solution onto ultrathin medical tape, which can then be attached to the skin for monitoring human physiological parameters and physical activity. Here, the sensor is validated for monitoring finger bending/movements and for recognizing hand motion patterns, thereby demonstrating its future potential for evaluating athletic performance, physical therapy, and designing next-generation human-machine interfaces. Furthermore, this study also quantifies the sensor’s ability to monitor eye blinking and radial pulse in real-time, which can find broader applications for the healthcare sector. Overall, the printed graphene-based sensing skin is highly conformable, flexible, lightweight, nonintrusive, mechanically robust, and is characterized by high strain sensitivity.

  5. Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.

    2001-01-01

    The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.

  6. Speech emotion recognition methods: A literature review

    NASA Astrophysics Data System (ADS)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, attention to research on emotional speech signals in human-machine interfaces has grown due to the availability of high computational capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available speech emotion recognition methods based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, the paper evaluates the performance and limitations of available methods. Furthermore, it highlights promising current directions for improving speech emotion recognition systems.

  7. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  8. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  9. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  10. 49 CFR 236.1013 - PTC Development Plan and Notice of Product Intent content requirements and Type Approval.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...

  11. Overview Electrotactile Feedback for Enhancing Human Computer Interface

    NASA Astrophysics Data System (ADS)

    Pamungkas, Daniel S.; Caesarendra, Wahyu

    2018-04-01

    To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback has increasingly been utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user’s interactive experience. However, these force and/or vibration actuated haptic feedback systems can be bulky and uncomfortable to wear and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, the sensory receptors within the skin that sense tactile stimuli and electric currents are described, and several factors that influence how the electric signal is transmitted to the brain via human skin are explained.

  12. Steering a Tractor by Means of an EMG-Based Human-Machine Interface

    PubMed Central

    Gomez-Gil, Jaime; San-Jose-Gonzalez, Israel; Nicolas-Alonso, Luis Fernando; Alonso-Garcia, Sergio

    2011-01-01

    An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates by means of the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from the Emotiv Company, was employed. This device, by means of 14 saline sensors, measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver’s scalp: eyes looking to the right and jaw opened, eyes looking to the right and jaw closed, eyes looking to the left and jaw opened, and eyes looking to the left and jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested these three guidance systems along three different trajectories: a straight line, a step, and a circumference. The accuracy of the EMG-based HMI guidance was lower than the accuracy obtained by manual guidance, which was lower in turn than the accuracy obtained by the autonomous GPS guidance; the computed standard deviations of error to the desired trajectory in the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviation between the manual guidance and the EMG-based HMI guidance differed by only 7 cm, and this difference is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor by an EMG-based HMI with almost the same accuracy as with manual steering. PMID:22164006

  13. Steering a tractor by means of an EMG-based human-machine interface.

    PubMed

    Gomez-Gil, Jaime; San-Jose-Gonzalez, Israel; Nicolas-Alonso, Luis Fernando; Alonso-Garcia, Sergio

    2011-01-01

    An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates by means of the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from the Emotiv Company, was employed. This device, by means of 14 saline sensors, measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver's scalp: eyes looking to the right and jaw opened, eyes looking to the right and jaw closed, eyes looking to the left and jaw opened, and eyes looking to the left and jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested these three guidance systems along three different trajectories: a straight line, a step, and a circumference. The accuracy of the EMG-based HMI guidance was lower than the accuracy obtained by manual guidance, which was lower in turn than the accuracy obtained by the autonomous GPS guidance; the computed standard deviations of error to the desired trajectory in the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviation between the manual guidance and the EMG-based HMI guidance differed by only 7 cm, and this difference is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor by an EMG-based HMI with almost the same accuracy as with manual steering.

  14. Passive BCI in Operational Environments: Insights, Recent Advances, and Future Trends.

    PubMed

    Arico, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Sciaraffa, Nicolina; Colosimo, Alfredo; Babiloni, Fabio

    2017-07-01

    This minireview aims to highlight recent important aspects to consider and evaluate when passive brain-computer interface (pBCI) systems are developed and used in operational environments, and outlines future directions of their applications. Electroencephalography (EEG) based pBCI has become an important tool for real-time analysis of brain activity since it can potentially provide, covertly (without distracting the user from the main task) and objectively (not affected by the subjective judgment of an observer or of the user), information about the operator's cognitive state. Different examples of pBCI applications in operational environments and new adaptive interface solutions are presented and described. In addition, a general overview regarding the correct use of machine learning techniques (e.g., which algorithm to use, common pitfalls to avoid, etc.) in the pBCI field is provided. Despite recent innovations in algorithms and neurotechnology, pBCI systems are not completely ready to enter the market yet, mainly due to limitations of EEG electrode technology and of algorithm reliability and capability in real settings. High-complexity and safety-critical systems (e.g., airplanes, ATM interfaces) should adapt their behavior and functionality according to the user's actual mental state. Thus, technologies (i.e., pBCIs) able to measure the user's mental states in real time would be very useful in such "high risk" environments to enhance human-machine interaction and so increase overall safety.

  15. Human factors model concerning the man-machine interface of mining crewstations

    NASA Technical Reports Server (NTRS)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspect of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.

  16. Human Factors in Accidents Involving Remotely Piloted Aircraft

    NASA Technical Reports Server (NTRS)

    Merlin, Peter William

    2013-01-01

    This presentation examines human factors that contribute to RPA mishaps and provides analysis of lessons learned. RPA accident data from U.S. military and government agencies were reviewed and analyzed to identify human factors issues. Common contributors to RPA mishaps fell into several major categories: cognitive factors (pilot workload), physiological factors (fatigue and stress), environmental factors (situational awareness), staffing factors (training and crew coordination), and design factors (human machine interface).

  17. Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design.

    PubMed

    Aromaa, Susanna; Väänänen, Kaisa

    2016-09-01

    In recent years, the use of virtual prototyping has increased in product development processes, especially in the assessment of complex systems targeted at end-users. The purpose of this study was to evaluate the suitability of virtual prototyping to support human factors/ergonomics (HFE) evaluation during the design phase. Two different virtual prototypes were used: augmented reality (AR) and virtual environment (VE) prototypes of a maintenance platform of a rock crushing machine. Nineteen designers and other stakeholders were asked to assess the suitability of the prototypes for HFE evaluation. Results indicate that the system model characteristics and user interface affect the experienced suitability. The VE system was valued as being more suitable to support the assessment of visibility, reach, and the use of tools than the AR system. The findings of this study can be used as guidance for implementing virtual prototypes in the product development process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Embedded Control System for Smart Walking Assistance Device.

    PubMed

    Bosnak, Matevz; Skrjanc, Igor

    2017-03-01

    This paper presents the design and implementation of a unique control system for a smart hoist, a therapeutic device that is used in rehabilitation of walking. The control system features a unique human-machine interface that allows the human to intuitively control the system just by moving or rotating their body. The paper contains an overview of the complete system, including the design and implementation of custom sensors, dc servo motor controllers, communication interfaces and an embedded-system based central control system. The prototype of the complete system was tested by conducting a six-run experiment on 11 subjects, and the results show that the proposed control system interface is indeed intuitive and simple for the user to adopt.

  19. A reductionist approach to the analysis of learning in brain-computer interfaces.

    PubMed

    Danziger, Zachary

    2014-04-01

    The complexity and scale of brain-computer interface (BCI) studies limit our ability to investigate how humans learn to use BCI systems. It also limits our capacity to develop adaptive algorithms needed to assist users with their control. Adaptive algorithm development is forced offline and typically uses static data sets. But this is a poor substitute for the online, dynamic environment where algorithms are ultimately deployed and interact with an adapting user. This work evaluates a paradigm that simulates the control problem faced by human subjects when controlling a BCI, but which avoids the many complications associated with full-scale BCI studies. Biological learners can be studied in a reductionist way as they solve BCI-like control problems, and machine learning algorithms can be developed and tested in closed loop with the subjects before being translated to full BCIs. The method is to map 19 joint angles of the hand (representing neural signals) to the position of a 2D cursor which must be piloted to displayed targets (a typical BCI task). An investigation is presented on how closely the joint angle method emulates BCI systems; a novel learning algorithm is evaluated, and a performance difference between genders is discussed.

  20. Evaluation of an Integrated Multi-Task Machine Learning System with Humans in the Loop

    DTIC Science & Technology

    2007-01-01

    Machine learning components (natural language processing and optimization) ... was examined with a test explicitly developed to measure the impact of integrated machine learning when used by a human user in a real-world setting ... The study revealed that integrated machine learning does produce a positive impact on overall performance. This paper also discusses how specific machine learning components contributed to human-system ...

  1. Matching brain-machine interface performance to space applications.

    PubMed

    Citi, Luca; Tonet, Oliver; Marinelli, Martina

    2009-01-01

    A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and an HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls, are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.

  2. Automation's Effect on Library Personnel.

    ERIC Educational Resources Information Center

    Dakshinamurti, Ganga

    1985-01-01

    Reports on survey studying the human-machine interface in Canadian university, public, and special libraries. Highlights include position category and educational background of 118 participants, participants' feelings toward automation, physical effects of automation, diffusion in decision making, interpersonal communication, future trends,…

  3. Humans and machines in space: The vision, the challenge, the payoff; AAS Goddard Memorial Symposium, 29th, Washington, DC, March 14-15, 1991

    NASA Astrophysics Data System (ADS)

    Johnson, Bradley; May, Gayle L.; Korn, Paula

    A recent symposium produced papers in the areas of solar system exploration, man machine interfaces, cybernetics, virtual reality, telerobotics, life support systems and the scientific and technology spinoff from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.

  4. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  5. Generating a Reduced Gravity Environment on Earth

    NASA Technical Reports Server (NTRS)

    Dungan, Larry K.; Cunningham, Tom; Poncia, Dina

    2010-01-01

    Since the 1950s several reduced gravity simulators have been designed and utilized in preparing humans for spaceflight and in reduced gravity system development. The Active Response Gravity Offload System (ARGOS) is the newest and most realistic gravity offload simulator. ARGOS provides three degrees of motion within the test area and is scalable for full building deployment. The inertia of the overhead system is eliminated by an active motor and control system. This presentation will discuss what ARGOS is, how it functions, and the unique challenges of interfacing to the human. Test data and video for human and robotic systems will be presented. A major variable in the human machine interaction is the interface of ARGOS to the human. These challenges along with design solutions will be discussed.

  6. Interactome INSIDER: a structural interactome browser for genomic studies.

    PubMed

    Meyer, Michael J; Beltrán, Juan Felipe; Liang, Siqi; Fragoza, Robert; Rumack, Aaron; Liang, Jin; Wei, Xiaomu; Yu, Haiyuan

    2018-01-01

    We present Interactome INSIDER, a tool to link genomic variant information with structural protein-protein interactomes. Underlying this tool is the application of machine learning to predict protein interaction interfaces for 185,957 protein interactions with previously unresolved interfaces in human and seven model organisms, including the entire experimentally determined human binary interactome. Predicted interfaces exhibit functional properties similar to those of known interfaces, including enrichment for disease mutations and recurrent cancer mutations. Through 2,164 de novo mutagenesis experiments, we show that mutations of predicted and known interface residues disrupt interactions at a similar rate and much more frequently than mutations outside of predicted interfaces. To spur functional genomic studies, Interactome INSIDER (http://interactomeinsider.yulab.org) enables users to identify whether variants or disease mutations are enriched in known and predicted interaction interfaces at various resolutions. Users may explore known population variants, disease mutations, and somatic cancer mutations, or they may upload their own set of mutations for this purpose.

  7. Learning algorithms for human-machine interfaces.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2009-05-01

    The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.
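
    The two adaptation rules compared in this abstract, incremental LMS gradient descent and a one-shot Moore-Penrose pseudoinverse fit, can be sketched in a few lines of numpy. The dimensions and signals below are synthetic stand-ins, not the actual glove data from the experiment.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the task: map high-dimensional "glove" signals x to two
    # joint angles y through a matrix W, adapting W from past endpoint errors.
    n_glove, n_joints, n_samples = 19, 2, 500
    W_true = rng.normal(size=(n_joints, n_glove))                     # unknown ideal map
    X = rng.normal(size=(n_samples, n_glove))                         # glove signals
    Y = X @ W_true.T + 0.05 * rng.normal(size=(n_samples, n_joints))  # desired joint angles

    # 1) LMS gradient descent: small incremental update after every trial.
    W_lms = np.zeros((n_joints, n_glove))
    eta = 0.01
    for x, y in zip(X, Y):
        err = y - W_lms @ x                 # endpoint error on this trial
        W_lms += eta * np.outer(err, x)

    # 2) Moore-Penrose pseudoinverse: one-shot least-squares fit to past performance.
    W_mp = (np.linalg.pinv(X) @ Y).T

    for name, W in [("LMS", W_lms), ("MP pseudoinverse", W_mp)]:
        mse = np.mean((X @ W.T - Y) ** 2)
        print(f"{name:17s} residual MSE: {mse:.4f}")
    ```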

  8. Man-machine interfaces in LACIE/ERIPS

    NASA Technical Reports Server (NTRS)

    Duprey, B. B. (Principal Investigator)

    1979-01-01

    One of the most important aspects of the interactive portion of the LACIE/ERIPS software system is the way in which the analysis and decision-making capabilities of a human being are integrated with the speed and accuracy of a computer to produce a powerful analysis system. The three major man-machine interfaces in the system are (1) the use of menus for communication between the software and the interactive user; (2) the checkpoint/restart facility to recreate in one job the internal environment achieved in an earlier one; and (3) the error recovery capability, which intercepts errors that would otherwise cause job termination. This interactive system, which executes on an IBM 360/75 mainframe, was adapted for use in noninteractive (batch) mode. A case study is presented to show how the interfaces work in practice by defining some fields based on an image screen display, noting the field definitions, and obtaining a film product of the classification map.

  9. Advanced warfighter machine interface (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Franks, Erin

    2005-05-01

    Future military crewmen may have more individual and shared tasks to complete throughout a mission as a result of smaller crew sizes and an increased number of technology interactions. To maintain reasonable workload levels, the Warfighter Machine Interface (WMI) must provide information in a consistent, logical manner, tailored to the environment in which the soldier will be completing their mission. This paper addresses design criteria for creating an advanced, multi-modal warfighter machine interface for on-the-move mounted operations. The Vetronics Technology Integration (VTI) WMI currently provides capabilities such as mission planning and rehearsal, voice and data communications, and manned/unmanned vehicle payload and mobility control. A history of the crewstation and, more importantly, of the WMI software is provided, together with an overview of the requirements and criteria used in completing the design. Multiple phases of field and laboratory testing provide the opportunity to evaluate the design and hardware in stationary and motion environments. Lessons learned related to system usability and user performance are presented with mitigation strategies to be tested in the future.

  10. The Body-Machine Interface: A new perspective on an old theme

    PubMed Central

    Casadio, Maura; Ranganathan, Rajiv; Mussa-Ivaldi, Ferdinando A.

    2012-01-01

    Body-machine interfaces establish a way to interact with a variety of devices, allowing their users to extend the limits of their performance. Recent advances in this field, ranging from computer-interfaces to bionic limbs, have had important consequences for people with movement disorders. In this article, we provide an overview of the basic concepts underlying the body-machine interface with special emphasis on their use for rehabilitation and for operating assistive devices. We outline the steps involved in building such an interface and we highlight the critical role of body-machine interfaces in addressing theoretical issues in motor control as well as their utility in movement rehabilitation. PMID:23237465

  11. Symposium on Aviation Psychology, 1st, Ohio State University, Columbus, OH, April 21, 22, 1981, Proceedings

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The impact of modern technology on the role, responsibility, authority, and performance of human operators in modern aircraft and ATC systems is examined in terms of principles defined by Paul Fitts. Research into human factors in aircraft operations and the use of human factors engineering for aircraft safety improvements are discussed, and features of the man-machine interface in computerized cockpit warning systems are examined. The design and operational features of computerized avionics displays and HUDs are described, along with results of investigations into pilot decision-making behavior, aircrew procedural compliance, and aircrew judgment training programs. Experiments in vision and visual perception are detailed, as are behavioral studies of crew workload, coordination, and complement. The effectiveness of pilot selection, screening, and training techniques is assessed, as are methods for evaluating pilot performance.

  12. A Concept for Optimizing Behavioural Effectiveness & Efficiency

    NASA Astrophysics Data System (ADS)

    Barca, Jan Carlo; Rumantir, Grace; Li, Raymond

    Both humans and machines exhibit strengths and weaknesses, and merging the two entities can enhance the former while compensating for the latter. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments in a more effective way than any system known today. To demonstrate this concept, we have developed a simulation in which a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third control system is an autonomous navigation system based on machine intelligence embedded in the spider, which enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra-sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.

  13. Identifying well-formed biomedical phrases in MEDLINE® text.

    PubMed

    Kim, Won; Yeganova, Lana; Comeau, Donald C; Wilbur, W John

    2012-12-01

    In the modern world people frequently interact with retrieval systems to satisfy their information needs. Humanly understandable, well-formed phrases represent a crucial interface between humans and the web, and the ability to index and search with such phrases is beneficial for human-web interactions. In this paper we consider the problem of identifying humanly understandable, well-formed, and high-quality biomedical phrases in MEDLINE documents. The main approaches used previously for detecting such phrases are syntactic, statistical, and a hybrid approach combining the two. In this paper we propose a supervised learning approach for identifying high-quality phrases. First we obtain a set of known well-formed, useful phrases from an existing source and label these phrases as positive. We then extract from MEDLINE a large set of multiword strings that do not contain stop words or punctuation. We believe this unlabeled set contains many well-formed phrases, and our goal is to identify these additional high-quality phrases. We examine various feature combinations and several machine learning strategies designed to solve this problem. A proper choice of machine learning methods and features identifies strings in the large collection that are likely to be high-quality phrases. We evaluate our approach by making human judgments on multiword strings extracted from MEDLINE using our methods and find that over 85% of the extracted phrase candidates are judged to be of high quality. Published by Elsevier Inc.

  14. Neurosurgery and the dawning age of Brain-Machine Interfaces

    PubMed Central

    Rowland, Nathan C.; Breshears, Jonathan; Chang, Edward F.

    2013-01-01

    Brain–machine interfaces (BMIs) are on the horizon for clinical neurosurgery. Electrocorticography-based platforms are less invasive than implanted microelectrodes, however, the latter are unmatched in their ability to achieve fine motor control of a robotic prosthesis capable of natural human behaviors. These technologies will be crucial to restoring neural function to a large population of patients with severe neurologic impairment – including those with spinal cord injury, stroke, limb amputation, and disabling neuromuscular disorders such as amyotrophic lateral sclerosis. On the opposite end of the spectrum are neural enhancement technologies for specialized applications such as combat. An ongoing ethical dialogue is imminent as we prepare for BMI platforms to enter the neurosurgical realm of clinical management. PMID:23653884

  15. Conductive fiber-based ultrasensitive textile pressure sensor for wearable electronics.

    PubMed

    Lee, Jaehong; Kwon, Hyukho; Seo, Jungmok; Shin, Sera; Koo, Ja Hoon; Pang, Changhyun; Son, Seungbae; Kim, Jae Hyung; Jang, Yong Hoon; Kim, Dae Eun; Lee, Taeyoon

    2015-04-17

    A flexible and sensitive textile-based pressure sensor is developed using highly conductive fibers coated with dielectric rubber materials. The pressure sensor exhibits superior sensitivity, very fast response time, and high stability, compared with previous textile-based pressure sensors. By using a weaving method, the pressure sensor can be applied to make smart gloves and clothes that can control machines wirelessly as human-machine interfaces. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Software and Human-Machine Interface Development for Environmental Controls Subsystem Support

    NASA Technical Reports Server (NTRS)

    Dobson, Matthew

    2018-01-01

    The Space Launch System (SLS) is the next premier launch vehicle for NASA. It is the next stage of manned space exploration from American soil, and will be the platform from which we push further beyond Earth orbit. In preparation for the SLS maiden voyage on Exploration Mission 1 (EM-1), the existing ground support architecture at Kennedy Space Center required significant overhaul and updating. A comprehensive upgrade of control systems was necessary, including programmable logic controller software, as well as Launch Control Center (LCC) firing room and local launch pad displays for technician use. Environmental control acts as an integral component in these systems, being the foremost system for conditioning the pad and the extremely sensitive launch vehicle until T-0. The Environmental Controls Subsystem (ECS) required testing and modification to meet the requirements of the designed system, as well as the human factors requirements of NASA software for Validation and Verification (V&V). Over this term, significant strides were made in the progress and functionality of the human-machine interfaces used at the launch pad and in their integration with the controller code.

  17. Gesture-controlled interfaces for self-service machines and other applications

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
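
    The "bank of predictor bins" idea in this record can be illustrated with a small sketch: model each gesture as a linear-in-parameters oscillator, fit its parameters by least squares, and assign an observed motion to the bin whose parameters predict it best. The gestures and parameter values below are invented for illustration, not taken from the patent.

    ```python
    import numpy as np

    def make_gesture(freq_hz, n=200, dt=0.02):
        """Synthetic oscillatory gesture trajectory."""
        return np.sin(2 * np.pi * freq_hz * np.arange(n) * dt)

    def fit_parameters(x):
        """Least-squares fit of the linear-in-parameters model x[t+1] = a*x[t] + b*x[t-1]."""
        A = np.column_stack([x[1:-1], x[:-2]])
        (a, b), *_ = np.linalg.lstsq(A, x[2:], rcond=None)
        return a, b

    def prediction_error(x, params):
        """One-step prediction error of a bin's parameters on an observed motion."""
        a, b = params
        return np.mean((x[2:] - (a * x[1:-1] + b * x[:-2])) ** 2)

    # Seed the predictor bins with parameters fit from known template gestures.
    bins = {name: fit_parameters(make_gesture(f))
            for name, f in [("slow wave", 1.0), ("fast wave", 3.0)]}

    # An observed noisy motion is assigned to the best-fitting bin.
    observed = make_gesture(3.0) + 0.05 * np.random.default_rng(1).normal(size=200)
    best = min(bins, key=lambda name: prediction_error(observed, bins[name]))
    print("recognized gesture:", best)
    ```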

  18. Body-Machine Interfaces after Spinal Cord Injury: Rehabilitation and Brain Plasticity.

    PubMed

    Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B; Wang, Xue; Parrish, Todd; Mussa-Ivaldi, Ferdinando A

    2016-12-19

    The purpose of this study was to identify rehabilitative effects and changes in white matter microstructure in people with high-level spinal cord injury following bilateral upper-extremity motor skill training. Five subjects with high-level (C5-C6) spinal cord injury (SCI) performed five visuo-spatial motor training tasks over 12 sessions (2-3 sessions per week). Subjects controlled a two-dimensional cursor with bilateral simultaneous movements of the shoulders using a non-invasive inertial measurement unit-based body-machine interface. Subjects' upper-body ability was evaluated before the start, in the middle and a day after the completion of training. MR imaging data were acquired before the start and within two days of the completion of training. Subjects learned to use upper-body movements that survived the injury to control the body-machine interface and improved their performance with practice. Motor training increased Manual Muscle Test scores and the isometric force of subjects' shoulders and upper arms. Moreover, motor training increased fractional anisotropy (FA) values in the cingulum of the left hemisphere by 6.02% on average, indicating localized white matter microstructure changes induced by activity-dependent modulation of axon diameter, myelin thickness or axon number. This body-machine interface may serve as a platform to develop a new generation of assistive-rehabilitative devices that promote the use of, and that re-strengthen, the motor and sensory functions that survived the injury.

  19. A machine learning system to improve heart failure patient assistance.

    PubMed

    Guidi, Gabriele; Pettenati, Maria Chiara; Melillo, Paolo; Iadanza, Ernesto

    2014-11-01

    In this paper, we present a clinical decision support system (CDSS) for the analysis of heart failure (HF) patients, providing various outputs such as an HF severity evaluation, HF-type prediction, and a management interface that compares the different patients' follow-ups. The system comprises an intelligent core and a special-purpose HF management tool, which also serves as the interface for training and using the artificial intelligence. To implement the intelligent functions, we adopted a machine learning approach. In this paper, we compare the performance of a neural network (NN), a support vector machine, a system with genetically produced fuzzy rules, and a classification and regression tree together with its direct evolution, the random forest, in analyzing our database. The best performance in both the HF severity evaluation and HF-type prediction functions is obtained by using the random forest algorithm. The management tool allows the cardiologist to populate a "supervised database" suitable for machine learning during his or her regular outpatient consultations. The idea comes from the fact that few databases of this type exist in the literature, and they are not scalable to our case.
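
    The classifier comparison described here can be approximated with a short scikit-learn sketch. The data below are synthetic stand-ins; the actual system was trained on a clinician-populated "supervised database" of heart-failure follow-ups, and the genetically produced fuzzy-rule system is omitted because it has no off-the-shelf equivalent.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for labeled follow-up records (features -> HF severity class).
    X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    models = {
        "neural network (MLP)":   MLPClassifier(max_iter=2000, random_state=0),
        "support vector machine": SVC(),
        "CART":                   DecisionTreeClassifier(random_state=0),
        "random forest":          RandomForestClassifier(random_state=0),
    }

    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name:24s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```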

  20. Massachusetts Institute of Technology Consortium Agreement

    DTIC Science & Technology

    1999-03-01

    This is the third progress report of the M.I.T. Home Automation and Healthcare Consortium-Phase Two. It covers the majority of the new findings, concepts...research projects of home automation and healthcare, ranging from human modeling, patient monitoring, and diagnosis to new sensors and actuators, physical...aids, human-machine interfaces, and home automation infrastructure. This report contains several patentable concepts, algorithms, and designs.

  1. Terminal Ailments Need Not Be Fatal: A Speculative Assessment of the Impact of Online Public Access Catalogs in Academic Settings.

    ERIC Educational Resources Information Center

    Sandler, Mark

    1985-01-01

    Discusses several concerns about nature of online public access catalogs (OPAC) that have particular import to reference librarians: user passivity and loss of control growing out of "human-machine interface" and the larger social context; and the tendency of computerized bibliographic systems to obfuscate human origins of library…

  2. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    NASA Astrophysics Data System (ADS)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  3. An online brain-machine interface using decoding of movement direction from the human electrocorticogram

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten

    2012-08-01

    A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
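
    A minimal sketch of the decoding step is shown below, assuming synthetic band-power features in place of the actual recordings; the real study decoded direction from directionally dependent ECoG signals recorded during arm movements, not from simulated data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_channels = 200, 32

    # Synthetic band-power features: a few channels carry direction information.
    labels = rng.integers(0, 2, n_trials)                # 0 = one direction, 1 = the other
    features = rng.normal(size=(n_trials, n_channels))
    features[:, :4] += 0.8 * (2 * labels[:, None] - 1)   # direction-dependent channels

    decoder = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(decoder, features, labels, cv=5).mean()
    print(f"cross-validated direction decoding accuracy: {accuracy:.2f}")
    ```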

  4. Mental workload prediction based on attentional resource allocation and information processing.

    PubMed

    Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin

    2015-01-01

    Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.

  5. Manipulator system man-machine interface evaluation program. [technology assessment

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Kirkpatrick, M.; Shields, N. L.

    1974-01-01

    Application and requirements for remote manipulator systems for future space missions were investigated. A manipulator evaluation program was established to study the effects of various systems parameters on operator performance of tasks necessary for remotely manned missions. The program and laboratory facilities are described. Evaluation criteria and philosophy are discussed.

  6. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human-machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s⁻¹, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong'.
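
    Throughput figures of this kind are typically computed as an index of difficulty divided by movement time, averaged over trials. The sketch below shows that arithmetic on made-up target distances, widths, and times; it does not reproduce the 43 bits/s figure or the authors' exact protocol.

    ```python
    import math

    # (target distance D, target width W, movement time in seconds) -- made-up trials
    trials = [
        (0.30, 0.02, 0.35),
        (0.20, 0.01, 0.40),
        (0.40, 0.04, 0.30),
    ]

    def index_of_difficulty(D, W):
        """Shannon formulation of Fitts' index of difficulty, in bits."""
        return math.log2(D / W + 1.0)

    throughputs = [index_of_difficulty(D, W) / t for D, W, t in trials]
    print(f"mean throughput: {sum(throughputs) / len(throughputs):.1f} bits/s")
    ```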

  7. Teleoperator system man-machine interface requirements for satellite retrieval and satellite servicing. Volume 1: Requirements

    NASA Technical Reports Server (NTRS)

    Malone, T. B.

    1972-01-01

    Requirements were determined analytically for the man machine interface for a teleoperator system performing on-orbit satellite retrieval and servicing. Requirements are basically of two types; mission/system requirements, and design requirements or design criteria. Two types of teleoperator systems were considered: a free flying vehicle, and a shuttle attached manipulator. No attempt was made to evaluate the relative effectiveness or efficiency of the two system concepts. The methodology used entailed an application of the Essex Man-Systems analysis technique as well as a complete familiarization with relevant work being performed at government agencies and by private industry.

  8. ClearTK 2.0: Design Patterns for Machine Learning in UIMA

    PubMed Central

    Bethard, Steven; Ogren, Philip; Becker, Lee

    2014-01-01

    ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework. PMID:29104966

  9. ClearTK 2.0: Design Patterns for Machine Learning in UIMA.

    PubMed

    Bethard, Steven; Ogren, Philip; Becker, Lee

    2014-05-01

    ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework.

  10. A Machine Learning System for Analyzing Human Tactics in a Game

    NASA Astrophysics Data System (ADS)

    Ito, Hirotaka; Tanaka, Toshimitsu; Sugie, Noboru

    In order to realize advanced man-machine interfaces, it is desirable to develop a system that can infer the mental state of human users and then return appropriate responses. As a first step toward this goal, we developed a system capable of inferring human tactics in a simple game played between the system and a human. We present a machine learning system that plays a color expectation game: the system infers the tactics of the opponent and then decides its action based on the result. We employed a modified version of a classifier system similar to XCS in order to design the system. In addition, three methods are proposed to accelerate the learning rate: a masking method, an iterative method, and tactics templates. The results of computer experiments confirm that the proposed methods effectively accelerate the machine learning. The masking method and the iterative method are effective for a simple strategy that considers only part of the past information; however, their learning speed is not sufficient for tactics that refer to a large amount of past information. In that case, the tactics template allows the learning to settle rapidly once the tactics are identified.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.R. McJunkin; R.L. Boring; M.A. McQueen

    Situational awareness in the operations and supervision of an industrial system means that the decision-making entity, whether machine or human, has the important data presented in a timely manner. An optimal presentation of information gives the operator the best opportunity to accurately interpret and react to anomalies due to system degradation, failures, or adversaries. Anticipated problems are a matter for system design; however, this paper focuses on concepts for enhancing a human operator's situational awareness when unanticipated or unaddressed event types occur. A methodology for human-machine interface development and a refinement strategy are described for a synthetic fuels plant model. A novel concept for adaptively highlighting the most interesting information in the system and a plan for testing the methodology are also described.

  12. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands

    PubMed Central

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system’s complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs. PMID:26069961
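
    The selection protocol described in this abstract can be summarized as a simple loop: the controller cycles through the available functions, announcing each one (in the real system via a distinct electrotactile pattern), and the user confirms the current one with a single command signal. The sketch below is a schematic of that loop with placeholder announcement and user models, not the actual prosthesis controller; the function names are hypothetical.

    ```python
    import itertools
    import random

    FUNCTIONS = ["open hand", "close hand", "pinch grip", "lateral grip", "wrist rotate"]

    def announce(function):
        # Placeholder for delivering the electrotactile pattern coding this function.
        print(f"announcing: {function}")

    def user_acknowledges(desired, current):
        # Placeholder user model: acknowledge only when the desired function is offered.
        return current == desired

    def run_menu(desired, functions=FUNCTIONS, max_cycles=3):
        """Cycle through the functions until the user acknowledges one."""
        for current in itertools.islice(itertools.cycle(functions),
                                        max_cycles * len(functions)):
            announce(current)
            if user_acknowledges(desired, current):
                print(f"selected: {current}")
                return current
        return None

    run_menu(desired=random.choice(FUNCTIONS))
    ```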

  13. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands.

    PubMed

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system's complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs.

  14. Human factors with nonhumans - Factors that affect computer-task performance

    NASA Technical Reports Server (NTRS)

    Washburn, David A.

    1992-01-01

    There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.

  15. A KARAOKE System Singing Evaluation Method that More Closely Matches Human Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    KARAOKE is a popular amusement for old and young. Many KARAOKE machines have singing evaluation function. However, it is often said that the scores given by KARAOKE machines do not match human evaluation. In this paper a KARAOKE scoring method strongly correlated with human evaluation is proposed. This paper proposes a way to evaluate songs based on the distance between singing pitch and musical scale, employing a vibrato extraction method based on template matching of spectrum. The results show that correlation coefficients between scores given by the proposed system and human evaluation are -0.76∼-0.89.
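
    The core of the proposed scoring, the distance between the sung pitch and the musical scale, can be sketched as the deviation in cents from the nearest equal-tempered note. The frequencies and the score mapping below are illustrative inventions; the actual system additionally extracts vibrato by spectral template matching before scoring.

    ```python
    import math

    A4 = 440.0  # reference pitch in Hz

    def cents_from_nearest_note(freq_hz):
        """Signed distance in cents from the nearest equal-tempered semitone."""
        semitones = 12.0 * math.log2(freq_hz / A4)
        return 100.0 * (semitones - round(semitones))

    def score(sung_freqs):
        """Map the mean absolute deviation (0-50 cents) to a 0-100 score, higher = better."""
        mean_dev = sum(abs(cents_from_nearest_note(f)) for f in sung_freqs) / len(sung_freqs)
        return max(0.0, 100.0 - 2.0 * mean_dev)

    sung = [441.0, 523.0, 330.5, 392.8]   # hypothetical frame-by-frame pitch estimates
    print(f"score: {score(sung):.1f}")
    ```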

  16. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  17. A study on the application of voice interaction in automotive human machine interface experience design

    NASA Astrophysics Data System (ADS)

    Huang, Zhaohui; Huang, Xiemin

    2018-04-01

    This paper first introduces the trend toward integrating multi-channel interactions in automotive HMI (Human Machine Interface), starting from the complex information models faced by existing automotive HMIs, and describes the various interaction modes. By comparing voice interaction with touch screens, gestures, and other interaction modes, the potential and feasibility of voice interaction in automotive HMI experience design are established. The related theories of voice interaction, recognition technologies, human cognitive models of voice, and voice design methods are then further explored, and the research priority of this paper is proposed: how to design voice interaction that creates more humane task-oriented dialogue scenarios to enhance the interactive experience of automotive HMI. The specific driving scenarios suitable for the use of voice interaction are studied and classified, and usability principles and key elements for automotive HMI voice design are proposed according to the scenario features. Then, through a user-participatory usability testing experiment, the dialogue processes of voice interaction in automotive HMI are defined. The logics and grammars of voice interaction are classified according to the experimental results, and the mental models in the interaction processes are analyzed. Finally, a voice interaction design method for creating humane task-oriented dialogue scenarios in the driving environment is proposed.

  18. Advanced system functions for the office information system

    NASA Astrophysics Data System (ADS)

    Ishikawa, Tetsuya

    The author first describes the functions needed for an office information management system. He then discusses the requisites for enhancing system functions, noting that such enhancements must be examined comprehensively from every point of view, including processing time and cost. In this paper, he concentrates on enhancing the man-machine interface (human interface), that is, on making the system easy to use for office workers.

  19. Intelligent Systems and Advanced User Interfaces for Design, Operation, and Maintenance of Command Management Systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1998-01-01

    Historically, Command Management Systems (CMS) have been large, expensive, spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS or a set of core components for CMS systems. Current MOC (mission operations center) hardware and software include Unix workstations, the C/C++ and Java programming languages, and X and Java window interface representations. This configuration provides the power and flexibility to support sophisticated systems and intelligent user interfaces that exploit state-of-the-art technologies in human-machine systems engineering, decision making, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of the issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, design and analysis tools from a human-machine systems engineering point of view (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of a spacecraft-specific CMS, as well as continuity for CMS design and development across spacecraft with varying needs. The savings in this case come from software reuse at all stages of the software engineering process.

  20. Avatars and virtual agents – relationship interfaces for the elderly

    PubMed Central

    2017-01-01

    In the Digital Era, the authors witness a change in the relationship between the patient and the caregiver or the Health Maintenance Organizations providing health services. Another trend is the use of various technologies to increase the effectiveness and quality of health services across all primary and secondary users. These technologies range from telemedicine systems, decision-making tools, and online self-service applications to virtual agents, all providing information and assistance. The common thread between all these digital implementations is that they all require human-machine interfaces. These interfaces must be interactive, user-friendly, and inviting, to create incentives for user involvement and cooperation. The challenge is to design interfaces that best fit the target users and enable smooth interaction, especially for elderly users. Avatars and virtual agents are one class of interface used for both home care monitoring and companionship. They are also inherently multimodal in nature and allow an intimate relation between the elderly user and the avatar. This study discusses the need for and nature of these relationship models and the challenges of designing for the elderly, and proposes key features for the design and evaluation of assistive applications using avatars and virtual agents for elderly users. PMID:28706725

  1. Human factors dimensions in the evolution of increasingly automated control rooms for near-earth satellites

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1982-01-01

    The NASA-Goddard Space Flight Center is responsible for the control and ground support of all of NASA's unmanned near-earth satellites. Traditionally, each satellite had its own dedicated mission operations room. In the mid-seventies, an integration of some of these dedicated facilities was begun with the primary objective of reducing costs; in this connection, the Multi-Satellite Operations Control Center (MSOCC) was designed. MSOCC is currently a labor-intensive operation. Recently, Goddard has become increasingly aware of human factors and human-machine interface issues. A summary is provided of some of the attempts to apply human factors considerations in the design of command and control environments. Current and future activities with respect to human factors and systems design are discussed, giving attention to the allocation of tasks between human and computer and to the interface for the human-computer dialogue.

  2. Errare machinale est: the use of error-related potentials in brain-machine interfaces

    PubMed Central

    Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.

    2014-01-01

    The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937
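
    One of the usage strategies reviewed here, correcting the machine's action whenever a single-trial ErrP is detected, can be caricatured with the sketch below. The decoder and ErrP detector are random placeholders, not real EEG processing; the point is only how an imperfect error signal is folded into the control loop.

    ```python
    import random

    random.seed(0)
    ACTIONS = ["left", "right"]

    def decode_intent():
        # Placeholder BMI/HCI decoder: here it is pure chance.
        return random.choice(ACTIONS)

    def errp_detected(intended, executed):
        # Placeholder single-trial ErrP detector with ~85% accuracy: it tends to fire
        # when the executed action does not match the user's intention.
        is_error = intended != executed
        return is_error if random.random() < 0.85 else not is_error

    def run_trial(intended):
        executed = decode_intent()
        if errp_detected(intended, executed):
            executed = "right" if executed == "left" else "left"   # error correction
        return executed

    hits = sum(run_trial(intended="left") == "left" for _ in range(1000))
    print(f"trials ending on the intended target: {hits / 1000:.0%}")
    ```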

  3. UAS Integration in the NAS Project: Part Task 6 V & V Simulation: Primary Results

    NASA Technical Reports Server (NTRS)

    Rorie, Conrad; Fern, Lisa; Shively, Jay; Santiago, Confesor

    2016-01-01

    This is a presentation of the preliminary results of the final V&V (Verification and Validation) simulation activity on the RTCA (Radio Technical Commission for Aeronautics) SC (Special Committee)-228 DAA (Detect and Avoid) HMI (Human-Machine Interface) requirements for display alerting and guidance.

  4. Advanced technologies for Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Dalton, John T.; Hughes, Peter M.

    1991-01-01

    Advanced technologies for Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: technology needs; current technology efforts at GSFC (human-machine interface development, object-oriented software development, expert systems, knowledge-based software engineering environments, and high performance VLSI telemetry systems); and test beds.

  5. Determining Value in Higher Education: The Future of Instructional Technology in a Wal-Mart Economy.

    ERIC Educational Resources Information Center

    Tremblay, Wilfred

    1992-01-01

    Discusses value and the economy and examines the changing definition of educational value regarding higher education. Trends in instructional technology resulting from changes in expected educational value are described, including resource sharing, specialization, market expansion, privatization, easier human-machine interfaces, feedback systems,…

  6. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  7. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  8. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  9. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  10. 49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...

  11. Experiencing the Sights, Smells, Sounds, and Climate of Southern Italy in VR.

    PubMed

    Manghisi, Vito M; Fiorentino, Michele; Gattullo, Michele; Boccaccio, Antonio; Bevilacqua, Vitoantonio; Cascella, Giuseppe L; Dassisti, Michele; Uva, Antonio E

    2017-01-01

    This article explores what it takes to make interactive computer graphics and VR attractive as a promotional vehicle, from the points of view of tourism agencies and the tourists themselves. The authors exploited current VR and human-machine interface (HMI) technologies to develop an interactive, innovative, and attractive user experience called the Multisensory Apulia Touristic Experience (MATE). The MATE system implements a natural gesture-based interface and multisensory stimuli, including visuals, audio, smells, and climate effects.

  12. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  13. Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.

    2001-01-01

    The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user can often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter, and an advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.

  14. A Touch Sensing Technique Using the Effects of Extremely Low Frequency Fields on the Human Body

    PubMed Central

    Elfekey, Hatem; Bastawrous, Hany Ayad; Okamoto, Shogo

    2016-01-01

    Touch sensing is a fundamental approach in human-to-machine interfaces and is currently in widespread use. Many current applications use active touch sensing technologies. Passive touch sensing technologies are, however, more adequate for implementing low-power or energy-harvesting touch sensing interfaces. This paper presents a passive touch sensing technique based on the fact that the human body is affected by the surrounding extremely low frequency (ELF) electromagnetic fields, such as those of AC power lines. These external ELF fields induce electric potentials on the human body—because human tissues exhibit some conductivity at these frequencies—resulting in what is called AC hum. We therefore propose a passive touch sensing system that detects this hum noise when a human touch occurs, thus distinguishing between touch and non-touch events. The effectiveness of the proposed technique is validated by designing and implementing a flexible touch sensing keyboard. PMID:27918416
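
    A minimal sketch of the detection principle, using a simulated electrode signal in place of real hardware and assuming 50 Hz mains, is shown below: touching the electrode increases the coupled mains-frequency component, so thresholding the power in that frequency bin distinguishes touch from non-touch events.

    ```python
    import numpy as np

    fs, n = 1000, 1000                      # 1 kHz sampling, 1 s window
    t = np.arange(n) / fs
    rng = np.random.default_rng(0)

    def electrode_signal(touching):
        """Simulated sensing-electrode signal: more coupled hum when touched."""
        hum_amplitude = 0.5 if touching else 0.02
        return hum_amplitude * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=n)

    def hum_power(x, mains_hz=50):
        """Power in the mains-frequency FFT bin."""
        spectrum = np.abs(np.fft.rfft(x)) / len(x)
        return spectrum[int(round(mains_hz * len(x) / fs))] ** 2

    THRESHOLD = 1e-3
    for touching in (False, True):
        detected = hum_power(electrode_signal(touching)) > THRESHOLD
        print(f"touching={touching}: detected={detected}")
    ```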

  15. A Touch Sensing Technique Using the Effects of Extremely Low Frequency Fields on the Human Body.

    PubMed

    Elfekey, Hatem; Bastawrous, Hany Ayad; Okamoto, Shogo

    2016-12-02

    Touch sensing is a fundamental approach in human-to-machine interfaces and is currently in widespread use. Many current applications use active touch sensing technologies. Passive touch sensing technologies are, however, more adequate for implementing low-power or energy-harvesting touch sensing interfaces. This paper presents a passive touch sensing technique based on the fact that the human body is affected by the surrounding extremely low frequency (ELF) electromagnetic fields, such as those of AC power lines. These external ELF fields induce electric potentials on the human body, because human tissues exhibit some conductivity at these frequencies, resulting in what is called AC hum. We therefore propose a passive touch sensing system that detects this hum noise when a human touch occurs, thus distinguishing between touch and non-touch events. The effectiveness of the proposed technique is validated by designing and implementing a flexible touch sensing keyboard.

  16. Optical HMI with biomechanical energy harvesters integrated in textile supports

    NASA Astrophysics Data System (ADS)

    De Pasquale, G.; Kim, SG; De Pasquale, D.

    2015-12-01

    This paper reports the design, prototyping, and experimental validation of a human-machine interface (HMI), named GoldFinger, integrated into a glove and powered by energy harvesting from finger motion. The device is aimed at medical applications, design tools, the virtual reality field, and industrial applications where interaction with machines is restricted by safety procedures. The HMI prototype includes four piezoelectric transducers applied to the backs of the fingers at the PIP (proximal inter-phalangeal) joints, electric wires embedded in the fabric connecting the transducers, an aluminum case for the electronics, a wearable switch made of conductive fabric to turn the communication channel on and off, and a LED. The electronic circuit used to manage the power and to control the light emitter includes a diode bridge, leveling capacitors, a storage battery, and the conductive-fabric switch. Communication with the machine is managed by dedicated software, which includes the user interface, the optical tracking, and the continuous updating of the machine microcontroller. The energy harvester's benefit to battery lifetime is inversely proportional to the activation time of the optical emitter; in most applications the optical port is active for 1 to 5% of the time, corresponding to a battery lifetime increase of between about 14% and 70%.

  17. The reported incidence of man-machine interface issues in Army aviators using the Aviator's Night Vision System (ANVIS) in a combat theatre

    NASA Astrophysics Data System (ADS)

    Hiatt, Keith L.; Rash, Clarence E.

    2011-06-01

    Background: Army Aviators rely on the ANVIS for night operations. Human factors literature notes that the ANVIS man-machine interface results in reports of visual and spinal complaints. This is the first study that has looked at these issues in the much harsher combat environment. Last year, the authors reported on the statistically significant (p<0.01) increased complaints of visual discomfort, degraded visual cues, and incidence of static and dynamic visual illusions in the combat environment [Proc. SPIE, Vol. 7688, 76880G (2010)]. In this paper we present the findings regarding increased spinal complaints and other man-machine interface issues found in the combat environment. Methods: A survey was administered to Aircrew deployed in support of Operation Enduring Freedom (OEF). Results: 82 Aircrew (representing an aggregate of >89,000 flight hours of which >22,000 were with ANVIS) participated. Analysis demonstrated high complaints of almost all levels of back and neck pain. Additionally, the use of body armor and other Aviation Life Support Equipment (ALSE) caused significant ergonomic complaints when used with ANVIS. Conclusions: ANVIS use in a combat environment resulted in higher and different types of reports of spinal symptoms and other man-machine interface issues over what was previously reported. Data from this study may be more operationally relevant than that of the peacetime literature as it is derived from actual combat and not from training flights, and it may have important implications about making combat predictions based on performance in training scenarios. Notably, Aircrew remarked that they could not execute the mission without ANVIS and ALSE and accepted the degraded ergonomic environment.

  18. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces.

    PubMed

    Liu, Yuhao; Norton, James J S; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R; Liu, Hank; Yan, Lingqing; Tran, Phat L; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A

    2016-11-01

    Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics.

  19. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces

    PubMed Central

    Liu, Yuhao; Norton, James J. S.; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R.; Liu, Hank; Yan, Lingqing; Tran, Phat L.; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A.; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J.; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A.

    2016-01-01

    Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics. PMID:28138529

  20. Rapid Prototyping and the Human Factors Engineering Process

    DTIC Science & Technology

    2016-08-29

    ... without the effort and cost associated with conventional man-in-the-loop simulation. Advocates suggest that rapid prototyping is compatible with ... use should be made of man-in-the-loop simulation to supplement those analyses, but that such simulation is expensive and time consuming, precluding ... conventional man-in-the-loop simulation. Rapid prototyping involves the construction and use of an executable model of a human-machine interface.

  1. The future of the provision process for mobility assistive technology: a survey of providers.

    PubMed

    Dicianno, Brad E; Joseph, James; Eckstein, Stacy; Zigler, Christina K; Quinby, Eleanor J; Schmeler, Mark R; Schein, Richard M; Pearlman, Jon; Cooper, Rory A

    2018-03-20

    The purpose of this study was to evaluate the opinions of providers of mobility assistive technologies to help inform a research agenda and set priorities. This survey study was anonymous and gathered opinions of individuals who participate in the process to provide wheelchairs and other assistive technologies to clients. Participants were asked to rank the importance of developing various technologies and rank items against each other in terms of order of importance. Participants were also asked to respond to several open-ended questions or statements. A total of 161 providers from 35 states within the USA consented to participation and completed the survey. This survey revealed themes of advanced wheelchair design, assistive robotics and intelligent systems, human machine interfaces and smart device applications. It also outlined priorities for researchers to provide continuing education to clients and providers. These themes will be used to develop research and development priorities. Implications for Rehabilitation • Research in advanced wheelchair design is needed to facilitate travel and environmental access with wheelchairs and to develop alternative power sources for wheelchairs.• New assistive robotics and intelligent systems are needed to help wheelchairs overcome obstacles or self-adjust, assist wheelchair navigation in the community, assist caregivers and transfers, and aid ambulation.• Innovations in human machine interfaces may help advance the control of mobility devices and robots with the brain, eye movements, facial gesture recognition or other systems.• Development of new smart devices is needed for better control of the environment, monitoring activity and promoting healthy behaviours.

  2. Sharing control between humans and automation using haptic interface: primary and secondary task performance benefits.

    PubMed

    Griffiths, Paul G; Gillespie, R Brent

    2005-01-01

    This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces.
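
    A minimal sketch of the shared-control idea described above: the automation renders an assist torque on the motorized wheel proportional to the lane-keeping error, and the driver's own torque adds to it, so the driver can feel, conform to, or override the assistance. The gains and error terms are illustrative assumptions, not the values used in the experiments.

        # Hedged sketch of haptic shared control on a motorized steering wheel.
        K_LATERAL = 2.0   # N*m per metre of lateral lane error (assumed gain)
        K_HEADING = 1.0   # N*m per radian of heading error (assumed gain)

        def assist_torque(lateral_error_m, heading_error_rad):
            """Automation's 'copilot' contribution, felt by the driver as a haptic cue."""
            return -(K_LATERAL * lateral_error_m + K_HEADING * heading_error_rad)

        def column_torque(lateral_error_m, heading_error_rad, driver_torque_nm):
            # Net torque on the steering column: automation and driver simply sum,
            # so the driver can always comply with or push against the assistance.
            return assist_torque(lateral_error_m, heading_error_rad) + driver_torque_nm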

  3. A Human Factors Perspective on Alarm System Research and Development 2000 to 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curt Braun; John Grimes; Eric Shaver

    By definition, alarms serve to notify human operators of out-of-parameter conditions that could threaten equipment, the environment, product quality and, of course, human life. Given the complexities of industrial systems, human machine interfaces, and the human operator, the understanding of how alarms and humans can best work together to prevent disaster is continually developing. This review examines advances in alarm research and development from 2000 to 2010 and includes the writings of trade professionals, engineering and human factors researchers, and standards organizations with the goal of documenting advances in alarms system design, research, and implementation.

  4. WTEC panel report on European nuclear instrumentation and controls

    NASA Technical Reports Server (NTRS)

    White, James D.; Lanning, David D.; Beltracchi, Leo; Best, Fred R.; Easter, James R.; Oakes, Lester C.; Sudduth, A. L.

    1991-01-01

    Control and instrumentation systems might be called the 'brain' and 'senses' of a nuclear power plant. As such they become the key elements in the integrated operation of these plants. Recent developments in digital equipment have allowed a dramatic change in the design of these instrument and control (I&C) systems. New designs are evolving with cathode ray tube (CRT)-based control rooms, more automation, and better logical information for the human operators. As these new advanced systems are developed, various decisions must be made about the degree of automation and the human-to-machine interface. Different stages of the development of control automation and of advanced digital systems can be found in various countries. The purpose of this technology assessment is to make a comparative evaluation of the control and instrumentation systems that are being used for commercial nuclear power plants in Europe and the United States. This study is limited to pressurized water reactors (PWR's). Part of the evaluation includes comparisons with a previous similar study assessing Japanese technology.

  5. Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control.

    PubMed

    Miller, Christopher A; Parasuraman, Raja

    2007-02-01

    To develop a method enabling human-like, flexible supervisory control via delegation to automation. Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. We review problems with static and adaptive (as opposed to "adaptable") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a "level of automation" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described. On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's "playbook." Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.

  6. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  7. OTM Machine Acceptance: In the Arab Culture

    NASA Astrophysics Data System (ADS)

    Rashed, Abdullah; Santos, Henrique

    Basically, neglecting the human factor is one of the main reasons for system failures or for technology rejection, even when important technologies are considered. Biometrics mostly have the characteristics needed for effortless acceptance, such as ease of use and usefulness, which are essential pillars of acceptance models such as TAM (technology acceptance model). However, this acceptance should still be investigated. Many studies have been carried out to research the issues of technology acceptance in different cultures, especially the western culture. Arabic culture lacks these types of studies, with few publications in this field. This paper introduces a new biometric interface for ATM machines. This interface depends on a promising biometric: odour. To gauge the acceptance of this biometric, we distributed a questionnaire via a web site, called for participation in the Arab area, and found that most respondents would accept the use of odour.

  8. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  9. Personalized keystroke dynamics for self-powered human--machine interfacing.

    PubMed

    Chen, Jun; Zhu, Guang; Yang, Jin; Jing, Qingshen; Bai, Peng; Yang, Weiqing; Qi, Xuewei; Su, Yuanjie; Wang, Zhong Lin

    2015-01-27

    The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human--machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without applying an external power. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.
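
    A short sketch of the keystroke-dynamics timing features the abstract alludes to: per-key hold (dwell) times and release-to-press intervals computed from press/release timestamps, which could then feed an identification model. The event format and feature names are assumptions.

        # Hedged sketch: turn (key, press_time, release_time) events into the timing
        # features commonly used for keystroke-dynamics biometrics.
        def keystroke_features(events):
            """events: list of (key, t_press, t_release) tuples, times in seconds."""
            dwell = [t_rel - t_prs for _, t_prs, t_rel in events]         # hold time per key
            flight = [events[i + 1][1] - events[i][2]                     # release-to-press gap
                      for i in range(len(events) - 1)]
            return {"dwell": dwell, "flight": flight}

        sample = [("h", 0.00, 0.09), ("i", 0.21, 0.28)]
        print(keystroke_features(sample))   # approximately {'dwell': [0.09, 0.07], 'flight': [0.12]}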

  10. CESAR research in intelligent machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.

    1986-01-01

    The Center for Engineering Systems Advanced Research (CESAR) was established in 1983 as a national center for multidisciplinary, long-range research and development in machine intelligence and advanced control theory for energy-related applications. Intelligent machines of interest here are artificially created operational systems that are capable of autonomous decision making and action. The initial emphasis for research is remote operations, with specific application to dexterous manipulation in unstructured dangerous environments where explosives, toxic chemicals, or radioactivity may be present, or in other environments with significant risk such as coal mining or oceanographic missions. Potential benefits include reduced risk to man in hazardous situations, machine replication of scarce expertise, minimization of human error due to fear or fatigue, and enhanced capability using high resolution sensors and powerful computers. A CESAR goal is to explore the interface between the advanced teleoperation capability of today, and the autonomous machines of the future.

  11. Toward FRP-Based Brain-Machine Interfaces—Single-Trial Classification of Fixation-Related Potentials

    PubMed Central

    Finke, Andrea; Essig, Kai; Marchioro, Giuseppe; Ritter, Helge

    2016-01-01

    The co-registration of eye tracking and electroencephalography provides a holistic measure of ongoing cognitive processes. Recently, fixation-related potentials have been introduced to quantify the neural activity in such bi-modal recordings. Fixation-related potentials are time-locked to fixation onsets, just like event-related potentials are locked to stimulus onsets. Compared to existing electroencephalography-based brain-machine interfaces that depend on visual stimuli, fixation-related potentials have the advantages that they can be used in free, unconstrained viewing conditions and can also be classified on a single-trial level. Thus, fixation-related potentials have the potential to allow for conceptually different brain-machine interfaces that directly interpret cortical activity related to the visual processing of specific objects. However, existing research has investigated fixation-related potentials only with very restricted and highly unnatural stimuli in simple search tasks while participant’s body movements were restricted. We present a study where we relieved many of these restrictions while retaining some control by using a gaze-contingent visual search task. In our study, participants had to find a target object out of 12 complex and everyday objects presented on a screen while the electrical activity of the brain and eye movements were recorded simultaneously. Our results show that our proposed method for the classification of fixation-related potentials can clearly discriminate between fixations on relevant, non-relevant and background areas. Furthermore, we show that our classification approach generalizes not only to different test sets from the same participant, but also across participants. These results promise to open novel avenues for exploiting fixation-related potentials in electroencephalography-based brain-machine interfaces and thus providing a novel means for intuitive human-machine interaction. PMID:26812487
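
    A compact sketch of the single-trial pipeline described above: cut EEG epochs time-locked to fixation onsets, flatten each epoch into a feature vector, and score a linear classifier with cross-validation. The sampling rate, epoch length, and the choice of linear discriminant analysis are assumptions, not the authors' exact settings.

        # Hedged sketch: single-trial classification of fixation-related potentials.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        FS = 250          # EEG sampling rate in Hz (assumption)
        EPOCH_S = 0.8     # epoch length after each fixation onset, in seconds (assumption)

        def epochs_from_fixations(eeg, fixation_onsets):
            """eeg: array (n_channels, n_samples); fixation_onsets: sample indices."""
            win = int(EPOCH_S * FS)
            return np.stack([eeg[:, t:t + win].ravel() for t in fixation_onsets])

        def single_trial_accuracy(eeg, fixation_onsets, labels):
            X = epochs_from_fixations(eeg, fixation_onsets)   # one flattened epoch per fixation
            return cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()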

  12. Intelligent man/machine interfaces on the space station

    NASA Technical Reports Server (NTRS)

    Daughtrey, Rodney S.

    1987-01-01

    Some important topics in the development of good, intelligent, usable man/machine interfaces for the Space Station are discussed. These computer interfaces should adhere strictly to three concepts or doctrines: generality, simplicity, and elegance. The motivation for natural language interfaces and their use and value on the Space Station, both now and in the future, are discussed.

  13. Some Ideas on the Microcomputer and the Information/Knowledge Workstation.

    ERIC Educational Resources Information Center

    Boon, J. A.; Pienaar, H.

    1989-01-01

    Identifies the optimal goal of knowledge workstations as the harmony of technology and human decision-making behaviors. Two types of decision-making processes are described and the application of each type to experimental and/or operational situations is discussed. Suggestions for technical solutions to machine-user interfaces are then offered.…

  14. Human machine interface to manually drive rhombic like vehicles in remote handling operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, Pedro; Vale, Alberto; Ventura, Rodrigo

    2015-07-01

    In the thermonuclear experimental reactor ITER, a vehicle named CTS is designed to transport a container with activated components inside the buildings. In nominal operations, the CTS is autonomously guided under supervision. However, in some unexpected situations, such as in rescue and recovery operations, the autonomous mode must be overridden and the CTS must be remotely guided by an operator. The CTS is a rhombic-like vehicle, with two drivable and steerable wheels along its longitudinal axis, providing omni-directional capabilities. The rhombic kinematics correspond to four control variables, which are difficult to manage in manual mode operation. This paper proposes a Human Machine Interface (HMI) to remotely guide the vehicle in manual mode. The proposed solution is implemented using a HMI with an encoder connected to a micro-controller and an analog 2-axis joystick. Experimental results were obtained comparing the proposed solution with other controller devices in different scenarios and using a software platform that simulates the kinematics and dynamics of the vehicle. (authors)
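
    A hedged sketch of how a manual command could be mapped to the four rhombic control variables mentioned above (speed and steering angle of the front and rear wheels on the longitudinal axis). The kinematic mapping from a commanded body twist is standard for two steerable wheels; the joystick-to-twist assignment and the half-wheelbase value are assumptions for illustration.

        # Hedged sketch: map a commanded body twist (vx, vy, yaw rate) to steering angle
        # and speed for the front and rear wheels of a rhombic-like vehicle.
        import math

        HALF_BASE = 1.0   # distance from vehicle centre to each wheel, in metres (assumption)

        def rhombic_wheel_commands(vx, vy, omega):
            commands = {}
            for name, x in (("front", +HALF_BASE), ("rear", -HALF_BASE)):
                wvx, wvy = vx, vy + omega * x          # velocity of that wheel's contact point
                commands[name] = {"steer_rad": math.atan2(wvy, wvx),
                                  "speed_mps": math.hypot(wvx, wvy)}
            return commands

        # e.g. joystick Y axis -> vx, joystick X axis -> omega (one possible manual-mode mapping)
        print(rhombic_wheel_commands(vx=0.5, vy=0.0, omega=0.2))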

  15. Task-Oriented, Naturally Elicited Speech (TONE) Database for the Force Requirements Expert System, Hawaii (FRESH)

    DTIC Science & Technology

    1988-09-01

    Subject terms: command and control; computational linguistics; expert system; voice recognition; man-machine interface; U.S. Government. Abstract: ... simulates the characteristics of FRESH on a smaller scale. This study assisted NOSC in developing a voice-recognition, man-machine interface that could be used with TONE and upgraded at a later date.

  16. Earth orbital teleoperator visual system evaluation program

    NASA Technical Reports Server (NTRS)

    Shields, N. L., Jr.; Kirkpatrick, M., III; Frederick, P. N.; Malone, T. B.

    1975-01-01

    Empirical tests of range estimation accuracy and resolution, via television, under monoptic and steroptic viewing conditions are discussed. Test data are used to derive man machine interface requirements and make design decisions for an orbital remote manipulator system. Remote manipulator system visual tasks are given and the effects of system parameters of these tasks are evaluated.

  17. Assessment of a human computer interface prototyping environment

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1993-01-01

    A Human Computer Interface (HCI) prototyping environment with embedded evaluation capability has been successfully assessed which will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. The HCI prototyping environment is designed to include four components: (1) a HCI format development tool, (2) a test and evaluation simulator development tool, (3) a dynamic, interactive interface between the HCI prototype and simulator, and (4) an embedded evaluation capability to evaluate the adequacy of an HCI based on a user's performance.

  18. Data Publication and Interoperability for Long Tail Researchers via the Open Data Repository's (ODR) Data Publisher.

    NASA Astrophysics Data System (ADS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.

    2016-12-01

    Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data has been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g. R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED)[1] which provides an interface to databases connected through a common metadata core. In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data to meet funding mandates and academic requirements as well as fostering increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.
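
    As a hedged illustration of the machine-readable access path mentioned above, the sketch below queries a dataset over HTTP and loads the JSON response; the endpoint URL, query parameters, and response fields are hypothetical placeholders, not the actual ODR or AHED API.

        # Hedged sketch: machine-based consumption of published records through a web API.
        # The base URL, path, parameters, and JSON structure are invented for illustration.
        import requests

        def fetch_records(base_url, template_id, query):
            resp = requests.get(f"{base_url}/api/records",
                                params={"template": template_id, "q": query},
                                timeout=30)
            resp.raise_for_status()
            return resp.json()

        # records = fetch_records("https://example-odr-instance.org", "minerals", "olivine")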

  19. A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces

    PubMed Central

    Heo, Jeong; Yoon, Heenam; Park, Kwang Suk

    2017-01-01

    Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain–computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through an 8-shape course without collision with obstacles. PMID:28644398

  20. A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces.

    PubMed

    Heo, Jeong; Yoon, Heenam; Park, Kwang Suk

    2017-06-23

    Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain-computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through an 8-shape course without collision with obstacles.
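
    A hedged sketch of a simple real-time eye-movement classifier on the two forehead EOG channels described above (one horizontal, one vertical), using signed peak deflections and an amplitude threshold; the threshold, units, and class labels are illustrative assumptions, not the authors' algorithm.

        # Hedged sketch: classify one gaze gesture from baseline-corrected forehead EOG epochs.
        import numpy as np

        THRESH_UV = 50.0   # deflection threshold in microvolts (placeholder)

        def classify_eog(horizontal, vertical):
            """horizontal, vertical: 1-D numpy arrays for one epoch, in microvolts."""
            h = horizontal[np.argmax(np.abs(horizontal))]   # signed peak of each channel
            v = vertical[np.argmax(np.abs(vertical))]
            if abs(h) < THRESH_UV and abs(v) < THRESH_UV:
                return "fixation"
            if abs(h) >= abs(v):
                return "right" if h > 0 else "left"
            return "up" if v > 0 else "down"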

  1. Humans, Intelligent Technology, and Their Interface: A Study of Brown’s Point

    DTIC Science & Technology

    2017-12-01

    ... known about the role of drivers. When combining humans and intelligent technology (machines), such as self-driving vehicles, how people think about ... disrupt the entire transportation industry and potentially change how society moves people and goods. The findings of the investigation are likely ... The power of suggestion is very important to understand and consider when framing and bringing meaning to new technology, which points to looking at ...

  2. Hybrid soft computing systems for electromyographic signals analysis: a review.

    PubMed

    Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates

    2014-02-03

    The electromyogram (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.
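
    A minimal sketch of one technique combination surveyed above: windowed time-domain EMG features (mean absolute value, RMS, zero crossings) fed to a support vector machine. The window-level features and the RBF kernel are common choices, not prescriptions from the review.

        # Hedged sketch: time-domain EMG features + SVM for movement-intent classification.
        import numpy as np
        from sklearn.svm import SVC

        def emg_features(window):
            """window: 1-D numpy array of EMG samples from one channel."""
            mav = np.mean(np.abs(window))                                   # mean absolute value
            rms = np.sqrt(np.mean(window ** 2))                             # root mean square
            zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]))  # zero crossings
            return np.array([mav, rms, zc])

        def train_classifier(windows, labels):
            X = np.stack([emg_features(w) for w in windows])
            return SVC(kernel="rbf").fit(X, labels)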

  3. Hybrid soft computing systems for electromyographic signals analysis: a review

    PubMed Central

    2014-01-01

    The electromyogram (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis. PMID:24490979

  4. Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.

    PubMed

    Krepkovich, Eileen T; Perreault, Eric J

    2008-01-01

    This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.
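
    A hedged sketch of the kind of input-selection problem addressed above: greedy forward selection of input channels for a multiple-output decoder, scored by cross-validated prediction quality. The linear model and the R-squared scoring are assumptions, not the authors' algorithm.

        # Hedged sketch: greedy forward selection of inputs for a multi-output decoder.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        def select_inputs(X, Y, n_select):
            """X: (n_samples, n_inputs); Y: (n_samples, n_outputs)."""
            chosen, remaining = [], list(range(X.shape[1]))
            while remaining and len(chosen) < n_select:
                scores = {j: cross_val_score(LinearRegression(),
                                             X[:, chosen + [j]], Y, cv=5).mean()
                          for j in remaining}
                best = max(scores, key=scores.get)      # input that most improves prediction
                chosen.append(best)
                remaining.remove(best)
            return chosen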

  5. Human Machine Interface Programming and Testing

    NASA Technical Reports Server (NTRS)

    Foster, Thomas Garrison

    2013-01-01

    Human Machine Interface (HMI) Programming and Testing is about creating graphical displays to mimic mission critical ground control systems in order to provide NASA engineers with the ability to monitor the health management of these systems in real time. The Health Management System (HMS) is an online interactive human machine interface system that monitors all Kennedy Ground Control Subsystem (KGCS) hardware in the field. The Health Management System is essential to NASA engineers because it allows remote control and monitoring of the health management systems of all the Programmable Logic Controllers (PLC) and associated field devices. KGCS will have equipment installed at the launch pad, Vehicle Assembly Building, Mobile Launcher, as well as the Multi-Purpose Processing Facility. I am designing graphical displays to monitor and control new modules that will be integrated into the HMS. The design of the display screen will closely mimic the appearance and functionality of the actual modules. There are many different field devices used to monitor health management and each device has its own unique set of health management related data, therefore each display must also have its own unique way to display this data. Once the displays are created, the RSLogix5000 application is used to write software that maps all the required data read from the hardware to the graphical display. Once this data is mapped to its corresponding display item, the graphical display and hardware device will be connected through the same network in order to test all possible scenarios and types of data the graphical display was designed to receive. Test Procedures will be written to thoroughly test out the displays and ensure that they are working correctly before being deployed to the field. Additionally, the Kennedy Ground Controls Subsystem's user manual will be updated to explain to the NASA engineers how to use the new module displays.

  6. Being human in a global age of technology.

    PubMed

    Whelton, Beverly J B

    2016-01-01

    This philosophical enquiry considers the impact of a global world view and technology on the meaning of being human. The global vision increases our awareness of the common bond between all humans, while technology tends to separate us from an understanding of ourselves as human persons. We review some advances in connecting as community within our world, and many examples of technological changes. This review is not exhaustive. The focus is to understand enough changes to think through the possibility of healthcare professionals becoming cyborgs, human-machine units that are subsequently neither human nor machine. It is seen that human technology interfaces are a different way of interacting but do not change what it is to be human in our rational capacities of providing meaningful speech and freely chosen actions. In the highly technical environment of the ICU, expert nurses work in harmony with both the technical equipment and the patient. We used Heidegger to consider the nature of equipment, and Descartes to explore unique human capacities. Aristotle, Wallace, Sokolowski, and Clarke provide a summary of humanity as substantial and relational. © 2015 John Wiley & Sons Ltd.

  7. Nanoscale wear and machining behavior of nanolayer interfaces.

    PubMed

    Nie, Xueyuan; Zhang, Peng; Weiner, Anita M; Cheng, Yang-Tse

    2005-10-01

    An atomic force microscope was used to subnanometer incise a nanomultilayer to consequently expose individual nanolayers and interfaces on which sliding and scanning nanowear/machining have been performed. The letter reports the first observation on the nanoscale where (i) atomic debris forms in a collective manner, most-likely by deformation and rupture of atomic bonds, and (ii) the nanolayer interfaces possess a much higher wear resistance (desired for nanomachines) or lower machinability (not desired for nanomachining) than the layers.

  8. Contrasting State-of-the-Art in the Machine Scoring of Short-Form Constructed Responses

    ERIC Educational Resources Information Center

    Shermis, Mark D.

    2015-01-01

    This study compared short-form constructed responses evaluated by both human raters and machine scoring algorithms. The context was a public competition on which both public competitors and commercial vendors vied to develop machine scoring algorithms that would match or exceed the performance of operational human raters in a summative high-stakes…

  9. Closeout of CRADA JSA 2012S004: Chapter 5, Integrated Control System, of the document of the ESS Conceptual Design Report, publicly available at https://europeanspallationsource.se/accelerator-documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Satogata, Todd

    2013-04-22

    The integrated control system (ICS) is responsible for the whole ESS machine and facility: accelerator, target, neutron scattering instruments and conventional facilities. This unified approach keeps the costs of development, maintenance and support relatively low. ESS has selected a standardised, field-proven controls framework, the Experimental Physics and Industrial Control System (EPICS), which was originally developed jointly by Argonne and Los Alamos National Laboratories. Complementing this selection are best practices and experience from similar facilities regarding platform standardisation, control system development and device integration and commissioning. The components of ICS include the control system core, the control boxes, the BLED database management system, and the human machine interface. The control system core is a set of systems and tools that make it possible for the control system to provide required data, information and services to engineers, operators, physicists and the facility itself. The core components are the timing system that makes possible clock synchronisation across the facility, the machine protection system (MPS) and the personnel protection system (PPS) that prevent damage to the machine and personnel, and a set of control system services. Control boxes are servers that control a collection of equipment (for example a radio frequency cavity). The integrated control system will include many control boxes that can be assigned to one supplier, such as an internal team, a collaborating institute or a commercial vendor. This approach facilitates a clear division of responsibilities and makes integration much easier. A control box is composed of a standardised hardware platform, components, development tools and services. On the top level, it interfaces with the core control system components (timing, MPS, PPS) and with the human-machine interface. At the bottom, it interfaces with the equipment and parts of the facility through a set of analog and digital signals, real-time control loops and other communication buses. The ICS central data management system is named BLED (beam line element databases). BLED is a set of databases, tools and services that is used to store, manage and access data. It holds vital control system configuration and physics-related (lattice) information about the accelerator, target and instruments. It facilitates control system configuration by bringing together direct input-output controller (IOC) configuration and real-time data from proton and neutron beam line models. BLED also simplifies development and speeds up the code-test-debug cycle. The set of tools that access BLED will be tailored to the needs of different categories of users, such as ESS staff physicists, engineers, and operators; external partner laboratories; and visiting experimental instrument users. The human-machine interface is vital to providing a high-quality experience to ICS users. It encompasses a wide array of devices and software tools, from control room screens to engineer terminal windows; from beam physics data tools to post-mortem data analysis tools. It serves users with a wide range of skills from widely varied backgrounds. The Controls Group is developing a set of user profiles to accommodate this diverse range of use-cases and users.
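
    For context on the EPICS framework named above, a minimal sketch of reading and writing process variables over Channel Access with the pyepics Python bindings; the process-variable names are invented placeholders, not actual ESS channels.

        # Hedged sketch: talking to EPICS process variables with pyepics (caget/caput).
        from epics import caget, caput

        SETPOINT_PV = "DEMO:RF:CAV1:PHASE_SP"   # hypothetical PV name
        READBACK_PV = "DEMO:RF:CAV1:PHASE_RB"   # hypothetical PV name

        caput(SETPOINT_PV, 12.5, wait=True)     # write a new setpoint and wait for completion
        print(READBACK_PV, caget(READBACK_PV))  # read back the corresponding value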

  10. Structural health monitoring for bolt loosening via a non-invasive vibro-haptics human-machine cooperative interface

    NASA Astrophysics Data System (ADS)

    Pekedis, Mahmut; Mascerañas, David; Turan, Gursoy; Ercan, Emre; Farrar, Charles R.; Yildiz, Hasan

    2015-08-01

    For the last two decades, developments in damage detection algorithms have greatly increased the potential for autonomous decisions about structural health. However, we are still struggling to build autonomous tools that can match the ability of a human to detect and localize the quantity of damage in structures. Therefore, there is a growing interest in merging the computational and cognitive concepts to improve the solution of structural health monitoring (SHM). The main object of this research is to apply the human-machine cooperative approach on a tower structure to detect damage. The cooperation approach includes haptic tools to create an appropriate collaboration between SHM sensor networks, statistical compression techniques and humans. Damage simulation in the structure is conducted by releasing some of the bolt loads. Accelerometers are bonded to various locations of the tower members to acquire the dynamic response of the structure. The obtained accelerometer results are encoded in three different ways to represent them as a haptic stimulus for the human subjects. Then, the participants are subjected to each of these stimuli to detect the bolt loosened damage in the tower. Results obtained from the human-machine cooperation demonstrate that the human subjects were able to recognize the damage with an accuracy of 88 ± 20.21% and response time of 5.87 ± 2.33 s. As a result, it is concluded that the currently developed human-machine cooperation SHM may provide a useful framework to interact with abstract entities such as data from a sensor network.

  11. Keyboard and message evaluation for cockpit input to data link

    DOT National Transportation Integrated Search

    1971-11-01

    The project reported herein studied some methods for implementation of the man-machine interface of Digital Data Link for Air Traffic Control. An analysis of information transfer requirements indicated that a vocabulary of less than 200 words could ...

  12. Distribution of man-machine controls in space teleoperation

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1982-01-01

    The distribution of control between man and machine is dependent on the tasks, available technology, human performance characteristics and control goals. This dependency has very specific projections on systems designed for teleoperation in space. This paper gives a brief outline of the space-related issues and presents the results of advanced teleoperator research and development at the Jet Propulsion Laboratory (JPL). The research and development work includes smart sensors, flexible computer controls and intelligent man-machine interface devices in the area of visual displays and kinesthetic man-machine coupling in remote control of manipulators. Some of the development results have been tested at the Johnson Space Center (JSC) using the simulated full-scale Shuttle Remote Manipulator System (RMS). The research and development work for advanced space teleoperation is far from complete and poses many interdisciplinary challenges.

  13. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. The man-machine-environment system is a complex system composed of human factors, machinery equipment and environment. The reliability of each individual factor must be analyzed in order to transition gradually to the study of three-factor reliability. Meanwhile, the dynamic relationship among man, machine and environment should be considered to establish an effective fuzzy evaluation mechanism to truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factors, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. Within the given evaluation domain, the reliability of the integrated man-machine-environment system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
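
    A hedged sketch of the weighted aggregation step behind an overall figure such as the 86.29 quoted above: subsystem reliability scores for the human, machine, and environment factors are combined with weights that sum to one. The scores and weights below are made up for illustration and do not reproduce the paper's inputs.

        # Hedged sketch: weighted aggregation of man-machine-environment reliability scores.
        def system_reliability(scores, weights):
            assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to one
            return sum(scores[k] * weights[k] for k in scores)

        scores = {"human": 84.0, "machine": 90.0, "environment": 85.0}    # illustrative values
        weights = {"human": 0.40, "machine": 0.35, "environment": 0.25}   # illustrative weights
        print(round(system_reliability(scores, weights), 2))              # 86.35 (illustrative)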

  14. Quantitative Evaluation of Heavy Duty Machine Tools Remanufacturing Based on Modified Catastrophe Progression Method

    NASA Astrophysics Data System (ADS)

    shunhe, Li; jianhua, Rao; lin, Gui; weimin, Zhang; degang, Liu

    2017-11-01

    The result of the remanufacturing evaluation is the basis for judging whether a heavy duty machine tool can be remanufactured in the EOL stage of the machine tool lifecycle. The objectivity and accuracy of the evaluation are key to the evaluation method. In this paper, the catastrophe progression method is introduced into the quantitative evaluation of heavy duty machine tool remanufacturing, and the results are modified by the comprehensive adjustment method, which brings the evaluation results into line with conventional human thinking. The catastrophe progression method is used to establish a quantitative evaluation model for heavy duty machine tools and to evaluate the remanufacturing of a retired TK6916 type CNC floor milling-boring machine. The evaluation process is simple and highly quantitative, and the result is objective.

  15. Advanced Military Pay System Concepts. Evaluation of Opportunities through Information Technology.

    DTIC Science & Technology

    1980-07-01

    ... transmitter (UART) to interface with a modem. The main processor was then responsible for input and output between main memory and the UART ... digital, "run-length" encoding scheme which is very effective in reducing the amount of data to be transmitted. Machines of this type include a modem ... Output control as well as data compression will be combined with appropriate modems or interfaces to digital transmission channels and microprocessor ...
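
    The "run-length" encoding referred to above is simple enough to show directly; a minimal sketch for a binary (black/white) scanline follows. The (value, run length) pair representation is one common choice, not necessarily the document's specific scheme.

        # Hedged sketch: run-length encode a sequence (e.g. a black/white scanline)
        # into (value, run length) pairs.
        def rle_encode(seq):
            runs, count = [], 0
            for i, value in enumerate(seq):
                count += 1
                if i + 1 == len(seq) or seq[i + 1] != value:   # end of a run
                    runs.append((value, count))
                    count = 0
            return runs

        print(rle_encode("0000111001"))   # [('0', 4), ('1', 3), ('0', 2), ('1', 1)]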

  16. The SmartHand transradial prosthesis

    PubMed Central

    2011-01-01

    Background Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past, nonetheless most of them would be inappropriate if/when a large bandwidth human-machine interface for control and perception would be available, due to either their limited (or inexistent) sensorization or limited dexterity. SmartHand tackles this issue as is meant to be clinically experimented in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on bench evaluation of the SmartHand. Methods SmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components in the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces. Results SmartHand is a five fingered self-contained robotic hand, with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count and independently point the index. The weight (530 g) and speed (closing time: 1.5 seconds) are comparable to actual commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that within particular friction and geometric conditions the hand is able to stably grasp up to 3.6 kg cylindrical objects. Conclusions Due to its unique embedded features and human-size, the SmartHand holds the promise to be experimentally fitted on transradial amputees and employed as a bi-directional instrument for investigating -during realistic experiments- different interfaces, control and feedback strategies in neuro-engineering studies. PMID:21600048

  17. Physiological properties of brain-machine interface input signals.

    PubMed

    Slutzky, Marc W; Flint, Robert D

    2017-08-01

    Brain-machine interfaces (BMIs), also called brain-computer interfaces (BCIs), decode neural signals and use them to control some type of external device. Despite many experimental successes and terrific demonstrations in animals and humans, a high-performance, clinically viable device has not yet been developed for widespread usage. There are many factors that impact clinical viability and BMI performance. Arguably, the first of these is the selection of brain signals used to control BMIs. In this review, we summarize the physiological characteristics and performance-including movement-related information, longevity, and stability-of multiple types of input signals that have been used in invasive BMIs to date. These include intracortical spikes as well as field potentials obtained inside the cortex, at the surface of the cortex (electrocorticography), and at the surface of the dura mater (epidural signals). We also discuss the potential for future enhancements in input signal performance, both by improving hardware and by leveraging the knowledge of the physiological characteristics of these signals to improve decoding and stability. Copyright © 2017 the American Physiological Society.

  18. Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagnini, Paolo; Krause, Josua W.; Dasgupta, Aritra

    2017-05-14

    To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytic interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.
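
    A hedged sketch of one way to produce model-agnostic, instance-level relevance scores for a binary classifier over binary features, in the spirit of the explanations described above: flip one feature at a time and record how the predicted probability moves. This is a generic sensitivity sketch, not the Rivelo algorithm.

        # Hedged sketch: instance-level, model-agnostic feature relevance for a black-box
        # binary classifier over binary features, by toggling one feature at a time.
        import numpy as np

        def feature_relevance(predict_proba, instance):
            """predict_proba: callable mapping an (n, d) array to P(class=1); instance: 1-D binary array."""
            base = predict_proba(instance[None, :])[0]
            relevance = np.zeros(instance.shape[0])
            for j in range(instance.shape[0]):
                flipped = instance.copy()
                flipped[j] = 1 - flipped[j]                       # toggle one binary feature
                relevance[j] = base - predict_proba(flipped[None, :])[0]
            return relevance   # positive => the feature's current value pushes toward class 1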

  19. Monitoring osseointegration and developing intelligent systems (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Salvino, Liming W.

    2017-05-01

    Effective monitoring of structural and biological systems is an extremely important research area that enables technology development for future intelligent devices, platforms, and systems. This presentation provides an overview of research efforts funded by the Office of Naval Research (ONR) to establish structural health monitoring (SHM) methodologies in the human domain. Basic science efforts are needed to utilize SHM sensing, data analysis, modeling, and algorithms to obtain the relevant physiological and biological information for human-specific health and performance conditions. This overview of current research efforts is based on the Monitoring Osseointegrated Prosthesis (MOIP) program. MOIP develops implantable and intelligent prosthetics that are directly anchored to the bone of residual limbs. Through real-time monitoring, sensing, and responding to osseointegration of bones and implants as well as interface conditions and environment, our research program aims to obtain individualized actionable information for implant failure identification, load estimation, infection mitigation and treatment, as well as healing assessment. Looking ahead to achieve ultimate goals of SHM, we seek to expand our research areas to cover monitoring human, biological and engineered systems, as well as human-machine interfaces. Examples of such include 1) brainwave monitoring and neurological control, 2) detecting and evaluating brain injuries, 3) monitoring and maximizing human-technological object teaming, and 4) closed-loop setups in which actions can be triggered automatically based on sensors, actuators, and data signatures. Finally, some ongoing and future collaborations across different disciplines for the development of knowledge automation and intelligent systems will be discussed.

  20. Self-assembling fluidic machines

    NASA Astrophysics Data System (ADS)

    Grzybowski, Bartosz A.; Radkowski, Michal; Campbell, Christopher J.; Lee, Jessamine Ng; Whitesides, George M.

    2004-03-01

    This letter describes dynamic self-assembly of two-component rotors floating at the interface between liquid and air into simple, reconfigurable mechanical systems ("machines"). The rotors are powered by an external, rotating magnetic field, and their positions within the interface are controlled by (i) repulsive hydrodynamic interactions between them and (ii) localized magnetic fields produced by an array of small electromagnets located below the plane of the interface. The mechanical functions of the machines depend on the spatiotemporal sequence of activation of the electromagnets.

  1. We can't explore space without it - Common human space needs for exploration spaceflight

    NASA Technical Reports Server (NTRS)

    Daues, K. R.; Erwin, H. O.

    1992-01-01

    An overview is conducted of physiological, psychological, and human-interface requirements for manned spaceflight programs to establish common criteria. Attention is given to the comfort levels relevant to human support in exploration mission spacecraft and planetary habitats, and three comfort levels (CLs) are established. The levels include: (1) CL-1 for basic crew life support; (2) CL-2 for enabling the nominal completion of mission science; and (3) CL-3 which provides for enhanced life support and user-friendly interface systems. CL-2 support systems can include systems for EVA, workstations, and activity centers for repairs and enhanced utilization of payload and human/machine integration. CL-3 supports can be useful for maintaining crew psychological and physiological health as well as the design of comfortable and earthlike surroundings. While all missions require CL-1 commonality, CL-2 commonality is required only for EVA systems, display nomenclature, and restraint designs.

  2. Final Report of Work Done on Contract NONR-4010(03).

    ERIC Educational Resources Information Center

    Chapanis, Alphonse

    The 24 papers listed report the findings of a study funded by the Office of Naval Research. The study concentrated on the sensory and cognitive factors in man-machine interfaces. The papers are categorized into three groups: perception studies, human engineering studies, and methodological papers. A brief summary of the most noteworthy findings in…

  3. Reducing lumber thickness variation using real-time statistical process control

    Treesearch

    Thomas M. Young; Brian H. Bond; Jan Wiedenbeck

    2002-01-01

    A technology feasibility study for reducing lumber thickness variation was conducted from April 2001 until March 2002 at two sawmills located in the southern U.S. A real-time statistical process control (SPC) system was developed that featured Wonderware human-machine interface (HMI) technology with distributed real-time control charts for all sawing centers and...
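
    A minimal sketch of the real-time X-bar control-chart logic that such an SPC system distributes to each sawing center; the subgroup size, control-chart constant, and thickness data are assumptions for illustration, not details from the study.

    import numpy as np

    A2 = 0.577                                                 # X-bar chart factor for subgroups of n = 5
    subgroups = np.random.normal(38.0, 0.5, size=(30, 5))      # historical thickness subgroups, mm (assumed)

    xbar = subgroups.mean(axis=1)
    rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()
    center = xbar.mean()
    ucl, lcl = center + A2 * rbar, center - A2 * rbar

    def out_of_control(sample):
        # Flag a new subgroup whose mean falls outside the control limits.
        m = float(np.mean(sample))
        return m > ucl or m < lcl

    print(round(lcl, 3), round(center, 3), round(ucl, 3))
    print(out_of_control([38.1, 38.0, 37.9, 38.2, 38.0]))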

  4. Massachusetts Institute of Technology Consortium Agreement

    DTIC Science & Technology

    1999-03-01

    In this, our second progress report of the Phase Two Home Automation and Healthcare Consortium at the Brit and Alex d’Arbeloff Laboratory for... Covered here are the diverse fields of home automation and healthcare research, ranging from human modeling, patient monitoring, and diagnosis to new... sensors and actuators, physical aids, human-machine interface and home automation infrastructure. These results will be presented at the upcoming General Assembly of the Consortium held on October 27-30, 1998, at MIT.

  5. Flexible and stretchable electronics for biointegrated devices.

    PubMed

    Kim, Dae-Hyeong; Ghaffari, Roozbeh; Lu, Nanshu; Rogers, John A

    2012-01-01

    Advances in materials, mechanics, and manufacturing now allow construction of high-quality electronics and optoelectronics in forms that can readily integrate with the soft, curvilinear, and time-dynamic surfaces of the human body. The resulting capabilities create new opportunities for studying disease states, improving surgical procedures, monitoring health/wellness, establishing human-machine interfaces, and performing other functions. This review summarizes these technologies and illustrates their use in forms integrated with the brain, the heart, and the skin.

  6. An All-Silk-Derived Dual-Mode E-skin for Simultaneous Temperature-Pressure Detection.

    PubMed

    Wang, Chunya; Xia, Kailun; Zhang, Mingchao; Jian, Muqiang; Zhang, Yingying

    2017-11-15

    Flexible skin-mimicking electronics are highly desired for the development of smart human-machine interfaces and wearable human-health monitors. Human skin is able to simultaneously detect different kinds of information, such as touch, friction, temperature, and humidity. However, due to mutual interference between sensors with different functions, it is still a major challenge to fabricate multifunctional electronic skins (E-skins). Herein, a combo temperature-pressure E-skin is reported, assembled from a temperature sensor and a strain sensor, both of which use flexible and transparent silk-nanofiber-derived carbon fiber membranes (SilkCFM) as the active material. The temperature sensor presents a high temperature sensitivity of 0.81% per degree Celsius. The strain sensor shows an extremely high sensitivity, with a gauge factor of ~8350 at 50% strain, enabling the detection of subtle pressure stimuli that induce local strain. Importantly, the structure of the SilkCFM in each sensor is designed to be passive to other stimuli, enabling the integrated E-skin to precisely detect temperature and pressure at the same time. It is demonstrated that the E-skin can detect and distinguish exhaling, finger pressing, and the spatial distribution of temperature and pressure, which cannot be realized using single-mode sensors. The remarkable performance of the silk-based combo temperature-pressure sensor, together with its green and scalable fabrication process, makes it promising for applications in human-machine interfaces and soft electronics.
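
    For reference, the reported strain sensitivity refers to the standard gauge-factor definition GF = (dR/R0)/strain; the short worked example below, with invented resistance values, shows how a gauge factor of about 8350 at 50% strain follows from that formula.

    def gauge_factor(r0, r, strain):
        # GF = relative resistance change divided by applied strain.
        return ((r - r0) / r0) / strain

    # Invented values: a 4175-fold relative resistance change at 50% strain gives GF = 8350.
    print(gauge_factor(1.0, 1.0 + 4175.0, 0.50))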

  7. Influence of repeated screw tightening on bacterial leakage along the implant-abutment interface.

    PubMed

    do Nascimento, Cássio; Pedrazzi, Vinícius; Miani, Paola Kirsten; Moreira, Larissa Daher; de Albuquerque, Rubens Ferreira

    2009-12-01

    Bacterial penetration along the implant-abutment interface as a consequence of abutment screw loosening has been reported in a number of recent studies. The aim of this in vitro study was to investigate the influence of repeated tightening of the abutment screw on leakage of Streptococcus mutans along the interface between implants and pre-machined abutments. Twenty pre-machined abutments with a plastic sleeve were used. The abutment screws were tightened to 32 N cm in group 1 (n=10 - control) and to 32 N cm, loosened and re-tightened with the same torque twice in group 2 (n=10). The assemblies were completely immersed in 5 ml of Tryptic Soy Broth medium inoculated with S. mutans and incubated for 14 days. After this period, contamination of the implant internal threaded chamber was evaluated using the DNA Checkerboard method. Microorganisms were found on the internal surfaces of both groups evaluated. However, bacterial counts in group 2 were significantly higher than that in the control group (P<0.05). These results suggest that bacterial leakage between implants and abutments occurs even under unloaded conditions and at a higher intensity when the abutment screw is tightened and loosened repeatedly.

  8. An extremely lightweight fingernail worn prosthetic interface device

    NASA Astrophysics Data System (ADS)

    Yetkin, Oguz; Ahluwalia, Simranjit; Silva, Dinithi; Kasi-Okonye, Isioma; Volker, Rachael; Baptist, Joshua R.; Popa, Dan O.

    2016-05-01

    Upper limb prosthetics are currently operated using several electromyography sensors mounted on an amputee's residual limb. In order for any prosthetic driving interface to be widely adopted, it needs to be responsive, lightweight, and out of the way when not being used. In this paper we discuss the possibility of replacing such electrodes with fingernail optical sensor systems mounted on the sound limb. We present a prototype device that can detect pinch gestures and communicate with the prosthetic system. The device detects the relative position of fingers to each other by measuring light transmitted via tissue. Applications are not limited to prosthetic control, but can be extended to other human-machine interfaces.

  9. Computer-assisted visual interactive recognition and its prospects of implementation over the Internet

    NASA Astrophysics Data System (ADS)

    Zou, Jie; Gattani, Abhishek

    2005-01-01

    When completely automated systems don't yield acceptable accuracy, many practical pattern recognition systems involve the human either at the beginning (pre-processing) or towards the end (handling rejects). We believe that it may be more useful to involve the human throughout the recognition process rather than just at the beginning or end. We describe a methodology of interactive visual recognition for human-centered low-throughput applications, Computer Assisted Visual InterActive Recognition (CAVIAR), and discuss the prospects of implementing CAVIAR over the Internet. The novelty of CAVIAR is image-based interaction through a domain-specific parameterized geometrical model, which reduces the semantic gap between humans and computers. The user may interact with the computer anytime that she considers its response unsatisfactory. The interaction improves the accuracy of the classification features by improving the fit of the computer-proposed model. The computer makes subsequent use of the parameters of the improved model to refine not only its own statistical model-fitting process, but also its internal classifier. The CAVIAR methodology was applied to implement a flower recognition system. The principal conclusions from the evaluation of the system include: 1) the average recognition time of the CAVIAR system is significantly shorter than that of the unaided human; 2) its accuracy is significantly higher than that of the unaided machine; 3) it can be initialized with as few as one training sample per class and still achieve high accuracy; and 4) it demonstrates a self-learning ability. We have also implemented a Mobile CAVIAR system, where a pocket PC, as a client, connects to a server through wireless communication. The motivation behind a mobile platform for CAVIAR is to apply the methodology in a human-centered pervasive environment, where the user can seamlessly interact with the system for classifying field-data. Deploying CAVIAR to a networked mobile platform poses the challenge of classifying field images and programming under constraints of display size, network bandwidth, processor speed, and memory size. Editing of the computer-proposed model is performed on the handheld while statistical model fitting and classification take place on the server. The possibility that the user can easily take several photos of the object poses an interesting information fusion problem. The advantage of the Internet is that the patterns identified by different users can be pooled together to benefit all peer users. When users identify patterns with CAVIAR in a networked setting, they also collect training samples and provide opportunities for machine learning from their intervention. CAVIAR implemented over the Internet provides a perfect test bed for, and extends, the concept of Open Mind Initiative proposed by David Stork. Our experimental evaluation focuses on human time, machine and human accuracy, and machine learning. We devoted much effort to evaluating the use of our image-based user interface and on developing principles for the evaluation of interactive pattern recognition system. The Internet architecture and Mobile CAVIAR methodology have many applications. We are exploring in the directions of teledermatology, face recognition, and education.

  10. Highly Stretchable Core-Sheath Fibers via Wet-Spinning for Wearable Strain Sensors.

    PubMed

    Tang, Zhenhua; Jia, Shuhai; Wang, Fei; Bian, Changsheng; Chen, Yuyu; Wang, Yonglin; Li, Bo

    2018-02-21

    Lightweight, stretchable, and wearable strain sensors have recently been widely studied for the development of health monitoring systems, human-machine interfaces, and wearable devices. Herein, highly stretchable polymer elastomer-wrapped carbon nanocomposite piezoresistive core-sheath fibers are successfully prepared using a facile and scalable one-step coaxial wet-spinning assembly approach. The carbon nanotube-polymeric composite core of the stretchable fiber is surrounded by an insulating sheath, similar to conventional cables, and shows excellent electrical conductivity with a low percolation threshold (0.74 vol %). The core-sheath elastic fibers are used as wearable strain sensors, exhibiting ultra-high stretchability (above 300%), excellent stability (>10 000 cycles), fast response, low hysteresis, and good washability. Furthermore, the piezoresistive core-sheath fiber possesses bending-insensitiveness and negligible torsion-sensitive properties, and the strain sensing performance of piezoresistive fibers maintains a high degree of stability under harsh conditions. On the basis of this high level of performance, the fiber-shaped strain sensor can accurately detect both subtle and large-scale human movements by embedding it in gloves and garments or by directly attaching it to the skin. The current results indicate that the proposed stretchable strain sensor has many potential applications in health monitoring, human-machine interfaces, soft robotics, and wearable electronics.

  11. Program Predicts Time Courses of Human/Computer Interactions

    NASA Technical Reports Server (NTRS)

    Vera, Alonso; Howes, Andrew

    2005-01-01

    CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
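
    A minimal sketch of the critical-path computation that underlies a PERT-style schedule of this kind: operators with durations and precedence constraints, where the predicted task time is the longest dependency path. The operator names and durations below are invented; this is not the CPM X implementation.

    from functools import lru_cache

    # Invented operators (durations in ms) and precedence constraints for one routine interaction step.
    durations = {"perceive": 100, "decide": 50, "move-hand": 200, "press-key": 100}
    depends_on = {"perceive": [], "decide": ["perceive"],
                  "move-hand": ["perceive"], "press-key": ["decide", "move-hand"]}

    @lru_cache(maxsize=None)
    def earliest_finish(op):
        start = max((earliest_finish(d) for d in depends_on[op]), default=0)
        return start + durations[op]

    print(max(earliest_finish(op) for op in durations))        # 400 ms: perceive -> move-hand -> press-key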

  12. Ant-Based Cyber Defense (also known as

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glenn Fink, PNNL

    2015-09-29

    ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.

  13. Next Generation Munitions Handler: Human-Machine Interface and Preliminary Performance Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draper, J.V.; Jansen, J.F.; Pin, F.G.

    1999-04-25

    The Next Generation Munitions Handler/Advanced Technology Demonstrator (NGMH/ATD) is a technology demonstrator for the application of an advanced robotic device for re-arming U.S. Air Force (USAF) and U.S. Navy (USN) tactical fighters. It comprises two key hardware components: a heavy-lift dexterous manipulator (HDM) and a nonholonomic mobility platform. The NGMH/ATD is capable of lifting weapons up to 2000 kg (4400 lb) and placing them on any weapons rack on existing fighters (including the F-22 Raptor). This report describes the NGMH mission with particular reference to human-machine interfaces. It also describes preliminary testing to garner feedback about the heavy-lift manipulator arm from experienced fighter load crewmen. The purpose of the testing was to provide preliminary information about control system parameters and to gather feedback from users about manipulator arm functionality. To that end, the Air Force load crewmen interacted with the NGMH/ATD in an informal testing session and provided feedback about the performance of the system. Certain control system parameters were changed during the course of the testing, and feedback from the participants was used to make a rough estimate of "good" initial operating parameters. Later, formal testing will concentrate within this range to identify optimal operating parameters. User reactions to the HDM were generally positive: all of the USAF personnel were favorably impressed with the capabilities of the system. Fine-tuning operating parameters created a system even more favorably regarded by the load crews. Further adjustment of control system parameters will result in a system that is operationally efficient, easy to use, and well accepted by users.

  14. Interface Metaphors for Interactive Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasper, Robert J.; Blaha, Leslie M.

    To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user’s knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display and the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms, or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.
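
    A minimal sketch, under assumptions, of one way implicit, metaphor-derived interaction signals could be folded into a learner alongside explicit labels: interaction-hinted items enter training as weakly weighted pseudo-labels next to analyst-provided labels. The dataset, the 0.3 down-weighting, and the logistic-regression learner are illustrative choices, not the authors' system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    labeled = list(range(10))                                  # items with explicit analyst labels
    implicit = {30: 1, 45: 0, 77: 1}                           # item -> label hinted by a metaphor interaction

    idx = labeled + list(implicit)
    y = np.r_[y_true[labeled], list(implicit.values())]
    w = np.r_[np.ones(len(labeled)), 0.3 * np.ones(len(implicit))]   # down-weight implicit feedback

    model = LogisticRegression().fit(X[idx], y, sample_weight=w)
    print(model.score(X, y_true))                              # accuracy over the full pool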

  15. The use of affective interaction design in car user interfaces.

    PubMed

    Gkouskos, Dimitrios; Chen, Fang

    2012-01-01

    Recent developments in the car industry have put Human Machine Interfaces under the spotlight. Developing gratifying human-car interactions has become one of the more prominent areas that car manufacturers want to invest in. However, concepts like emotional design remain foreign to the industry. In this study, 12 experts in the field of automobile HMI design were interviewed in order to investigate their needs and opinions of emotional design. Results show that emotional design has yet to be introduced for this context of use. Designers need a tool customized for the intricacies of the car HMI field that can provide them with support and guidance so that they can create emotionally attractive experiences for drivers and passengers alike.

  16. Extending human proprioception to cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David

    2016-04-01

    Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
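
    A minimal sketch of the kind of sensory-substitution mapping described: each robot joint angle is mapped onto the drive frequency of the vibrotactile actuator assigned to that joint on the operator's forearm. The joint ranges, frequency band, and linear mapping are assumptions for illustration.

    def angle_to_frequency(angle_deg, angle_min=-90.0, angle_max=90.0, f_min=50.0, f_max=250.0):
        # Clamp the joint angle to its range and map it linearly onto an actuator frequency (Hz).
        a = min(max(angle_deg, angle_min), angle_max)
        return f_min + (a - angle_min) / (angle_max - angle_min) * (f_max - f_min)

    # One vibrotactile actuator per joint of a three-joint arm (angles in degrees).
    arm_pose = {"shoulder": 30.0, "elbow": -45.0, "wrist": 10.0}
    print({joint: angle_to_frequency(a) for joint, a in arm_pose.items()})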

  17. A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management.

    PubMed

    Hocraffer, Amy; Nam, Chang S

    2017-01-01

    A meta-analysis was conducted to systematically evaluate the current state of research on human-system interfaces for users controlling semi-autonomous swarms composed of groups of drones or unmanned aerial vehicles (UAVs). UAV swarms pose several human factors challenges, such as high cognitive demands, non-intuitive behavior, and serious consequences for errors. This article presents findings from a meta-analysis of 27 UAV swarm management papers focused on the human-system interface and human factors concerns, providing an overview of the advantages, challenges, and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated. In general, allowing user- and mission-specific customization of user interfaces and raising the swarm's level of autonomy to reduce operator cognitive workload are beneficial and improve situation awareness (SA). It is clear that more research is needed in this rapidly evolving field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
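
    A minimal sketch, under assumptions, of the style of text-based client-server exchange such a layered controller could use between the machine-controller layer and one hardware subsystem; the command set and port are invented, and the real system used Java applets and QNX rather than Python.

    import socket
    import threading
    import time

    def subsystem_server(port=5050):
        # One hardware subsystem: accept a single text command and acknowledge it.
        with socket.create_server(("127.0.0.1", port)) as srv:
            conn, _ = srv.accept()
            with conn:
                cmd = conn.recv(1024).decode().strip()         # e.g. "MOVE_STAGE 12.5"
                conn.sendall(("OK " + cmd.split()[0]).encode())

    threading.Thread(target=subsystem_server, daemon=True).start()
    time.sleep(0.2)                                            # give the subsystem time to start listening

    # The machine-controller layer acts as the client of the subsystem.
    with socket.create_connection(("127.0.0.1", 5050)) as c:
        c.sendall(b"MOVE_STAGE 12.5")
        print(c.recv(1024).decode())                           # -> OK MOVE_STAGE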

  19. Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma.

    PubMed

    Wrzeszczynski, Kazimierz O; Frank, Mayu O; Koyama, Takahiko; Rhrissorrakrai, Kahn; Robine, Nicolas; Utro, Filippo; Emde, Anne-Katrin; Chen, Bo-Juen; Arora, Kanika; Shah, Minita; Vacic, Vladimir; Norel, Raquel; Bilal, Erhan; Bergmann, Ewa A; Moore Vogel, Julia L; Bruce, Jeffrey N; Lassman, Andrew B; Canoll, Peter; Grommes, Christian; Harvey, Steve; Parida, Laxmi; Michelini, Vanessa V; Zody, Michael C; Jobanputra, Vaidehi; Royyuru, Ajay K; Darnell, Robert B

    2017-08-01

    To analyze a glioblastoma tumor specimen with 3 different platforms and compare potentially actionable calls from each. Tumor DNA was analyzed by a commercial targeted panel. In addition, tumor-normal DNA was analyzed by whole-genome sequencing (WGS) and tumor RNA was analyzed by RNA sequencing (RNA-seq). The WGS and RNA-seq data were analyzed by a team of bioinformaticians and cancer oncologists, and separately by IBM Watson Genomic Analytics (WGA), an automated system for prioritizing somatic variants and identifying drugs. More variants were identified by WGS/RNA analysis than by targeted panels. WGA completed a comparable analysis in a fraction of the time required by the human analysts. The development of an effective human-machine interface in the analysis of deep cancer genomic datasets may provide potentially clinically actionable calls for individual patients in a more timely and efficient manner than currently possible. NCT02725684.

  20. A Qualitative Model of Human Interaction with Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in consort to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  1. A qualitative model of human interaction with complex dynamic systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in consort to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  2. Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia

    1996-01-01

    The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.

  3. Improving air traffic control: Proving new tools or approving the joint human-machine system?

    NASA Technical Reports Server (NTRS)

    Gaillard, Irene; Leroux, Marcel

    1994-01-01

    From the description of a field problem (i.e., designing decision aids for air traffic controllers), this paper points out how a cognitive engineering approach provides the milestones for the evaluation of future joint human-machine systems.

  4. Low Latency Messages on Distributed Memory Multiprocessors

    DOE PAGES

    Rosing, Matt; Saltz, Joel

    1995-01-01

    This article describes many of the issues in developing an efficient interface for communication on distributed memory machines. Although the hardware component of message latency is less than 1 μs on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 μs. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine-grained communication can be put on these machines. This article describes several tests performed and many of the issues involved in supporting low latency messages on distributed memory machines.

  5. Designing Contestability: Interaction Design, Machine Learning, and Mental Health

    PubMed Central

    Hirsch, Tad; Merced, Kritzia; Narayanan, Shrikanth; Imel, Zac E.; Atkins, David C.

    2017-01-01

    We describe the design of an automated assessment and training tool for psychotherapists to illustrate challenges with creating interactive machine learning (ML) systems, particularly in contexts where human life, livelihood, and wellbeing are at stake. We explore how existing theories of interaction design and machine learning apply to the psychotherapy context, and identify “contestability” as a new principle for designing systems that evaluate human behavior. Finally, we offer several strategies for making ML systems more accountable to human actors. PMID:28890949

  6. Reprint of: Client interfaces to the Virtual Observatory Registry

    NASA Astrophysics Data System (ADS)

    Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.

    2015-06-01

    The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.

  7. Client interfaces to the Virtual Observatory Registry

    NASA Astrophysics Data System (ADS)

    Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.

    2015-04-01

    The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.

  8. IBM PC/IX operating system evaluation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Granier, Martin; Hall, Philip P.; Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation plan for the IBM PC/IX Operating System designed for IBM PC/XT computers is discussed. The evaluation plan covers the areas of performance measurement and evaluation, software facilities available, man-machine interface considerations, networking, and the suitability of PC/IX as a development environment within the University of Southwestern Louisiana NASA PC Research and Development project. In order to compare and evaluate the PC/IX system, comparisons with other available UNIX-based systems are also included.

  9. The human role in space (THURIS) applications study. Final briefing

    NASA Technical Reports Server (NTRS)

    Maybee, George W.

    1987-01-01

    The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.

  10. In Vitro Evaluation of a Program for Machine-Aided Indexing.

    ERIC Educational Resources Information Center

    Jacquemin, Christian; Daille, Beatrice; Royaute, Jean; Polanco, Xavier

    2002-01-01

    Presents the human evaluation of ILIAD, a program for machine-aided indexing that was designed to assist expert librarians in computer-aided indexing and document analysis. Topics include controlled indexing and free indexing; natural language and concept-based information retrieval; evaluation methodology; syntactic variations; and a comparison…

  11. The ZOG Technology Demonstration Project: A System Evaluation of USS CARL VINSON (CVN 70)

    DTIC Science & Technology

    1984-12-01

    ...part of a larger project involving development of a wide range of computer technologies, including artificial intelligence and a long-range computer... Shipboard management, aircraft management, expert systems, menu selection, man-machine interface, artificial intelligence, automation; shipboard... functions, planning, evaluation, training, hierarchical data bases. The objective of this project was to conduct an evaluation of ZOG, a general purpose...

  12. An Experience of Teaching for Learning by Observation: Remote-Controlled Experiments on Electrical Circuits

    ERIC Educational Resources Information Center

    Kong, Siu Cheung; Yeung, Yau Yuen; Wu, Xian Qiu

    2009-01-01

    In order to facilitate senior primary school students in Hong Kong to engage in learning by observation of the phenomena related to electrical circuits, a design of a specific courseware system, of which the interactive human-machine interface was created with the use of an open-source software called the LabVNC, for conducting online…

  13. Designing Microstructures/Structures for Desired Functional Material and Local Fields

    DTIC Science & Technology

    2015-12-02

    ...utilized to engineer multifunctional soft materials for multi-sensing, multi-actuating, human-machine interfaces. [3] Establish a theoretical framework... model for surface elasticity, (ii) derived a new type of Maxwell stress in soft materials due to quantum mechanical-elasticity coupling and... elucidated its ramification in engineering multifunctional soft materials, and (iii) demonstrated the possibility of concurrent magnetoelectricity and...

  14. Media-Augmented Exercise Machines

    NASA Astrophysics Data System (ADS)

    Krueger, T.

    2002-01-01

    Cardio-vascular exercise has been used to mitigate the muscle and cardiac atrophy associated with adaptation to micro-gravity environments. Several hours per day may be required. In confined spaces and on long-duration missions this kind of exercise is inevitably repetitive and rapidly becomes uninteresting. At the same time, there are pressures to accomplish as much as possible given the cost per hour for humans occupying orbiting or interplanetary spacecraft. Media augmentation provides a means to overlap activities in time by supplementing the exercise with social, recreational, training, or collaborative activities, thereby reducing time pressures. In addition, the machine functions as an interface to a wide range of digital environments, allowing for spatial variety in an otherwise confined environment. We hypothesize that the adoption of media-augmented exercise machines will have a positive effect on psycho-social well-being on long-duration missions. By organizing and supplementing exercise machines, data acquisition hardware, computers, and displays into an interacting system, this proposal increases functionality with limited additional mass. This paper reviews preliminary work on a project to augment exercise equipment in a manner that addresses these issues and at the same time opens possibilities for additional benefits. A testbed augmented exercise machine uses a specially built cycle trainer as both an input to a virtual environment and an output device from it, using spatialized sound, visual displays, vibration transducers, and variable resistance. The resulting interactivity increases the sense of engagement in the exercise and provides a rich experience of the digital environments. Activities in the virtual environment and accompanying physiological and psychological indicators may be correlated to track and evaluate the health of the crew.

  15. Technology Roadmap Instrumentation, Control, and Human-Machine Interface to Support DOE Advanced Nuclear Energy Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donald D Dudenhoeffer; Burce P Hallbert

    Instrumentation, Controls, and Human-Machine Interface (ICHMI) technologies are essential to ensuring delivery and effective operation of optimized advanced Generation IV (Gen IV) nuclear energy systems. In 1996, the Watts Bar I nuclear power plant in Tennessee was the last U.S. nuclear power plant to go on line. It was, in fact, built based on pre-1990 technology. Since this last U.S. nuclear power plant was designed, there have been major advances in the field of ICHMI systems. Computer technology employed in other industries has advanced dramatically, and computing systems are now replaced every few years as they become functionally obsolete. Functional obsolescence occurs when newer, more functional technology replaces or supersedes an existing technology, even though the existing technology may well be in working order. Although ICHMI architectures are composed of much of the same technology, they have not been updated nearly as often in the nuclear power industry. For example, some newer Personal Digital Assistants (PDAs) or handheld computers may, in fact, have more functionality than the 1996 computer control system at the Watts Bar I plant. This illustrates the need to transition and upgrade current nuclear power plant ICHMI technologies.

  16. Combined Auditory and Vibrotactile Feedback for Human-Machine-Interface Control.

    PubMed

    Thorp, Elias B; Larson, Eric; Stepp, Cara E

    2014-01-01

    The purpose of this study was to determine the effect of the addition of binary vibrotactile stimulation to continuous auditory feedback (vowel synthesis) for human-machine interface (HMI) control. Sixteen healthy participants controlled facial surface electromyography to achieve 2-D targets (vowels). Eight participants used only real-time auditory feedback to locate targets whereas the other eight participants were additionally alerted to having achieved targets with confirmatory vibrotactile stimulation at the index finger. All participants trained using their assigned feedback modality (auditory alone or combined auditory and vibrotactile) over three sessions on three days and completed a fourth session on the third day using novel targets to assess generalization. Analyses of variance performed on the 1) percentage of targets reached and 2) percentage of trial time at the target revealed a main effect for feedback modality: participants using combined auditory and vibrotactile feedback performed significantly better than those using auditory feedback alone. No effect was found for session or the interaction of feedback modality and session, indicating a successful generalization to novel targets but lack of improvement over training sessions. Future research is necessary to determine the cognitive cost associated with combined auditory and vibrotactile feedback during HMI control.

  17. The role of voice input for human-machine communication.

    PubMed Central

    Cohen, P R; Oviatt, S L

    1995-01-01

    Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803

  18. The JPL telerobot operator control station. Part 1: Hardware

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Tower, John T.; Hunka, George W.; Vansant, Glenn J.

    1989-01-01

    The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The hardware design, system architecture, and its integration and interface with the rest of the Telerobot Demonstrator System are discussed.

  19. Analysis and prediction of meal motion by EMG signals

    NASA Astrophysics Data System (ADS)

    Horihata, S.; Iwahara, H.; Yano, K.

    2007-12-01

    The lack of carers for senior citizens and physically handicapped persons in our country has now become a huge issue and has created a great need for carer robots. The usual carer robots (many of which have switches or joysticks for their interfaces), however, are neither easy to use nor very popular. Therefore, haptic devices have been adopted for a human-machine interface that enables intuitive operation. A method is now being tested that seeks to prevent incorrect operation arising from the user's signals; this method matches motions with EMG signals.

  20. Supervisory Control of Multiple Uninhabited Systems - Methodologies and Enabling Human-Robot Interface Technologies (Commande et surveillance de multiples systemes sans pilote - Methodologies et technologies habilitantes d’interfaces homme-machine)

    DTIC Science & Technology

    2012-12-01

    ...FRANCE. 6.1 DATES: SMAART (2006–2008) and SUSIE (2009–2011). 6.2 LOCATION: Brest, Nancy, Paris (France). 6.3 SCENARIO/TASKS: The setting... Agency (RTA), a dedicated staff with its headquarters in Neuilly, near Paris, France. In order to facilitate contacts with the military users and... Mission Delay for the Helicopter; Table 8-2: Assistant Interventions and Commander’s Reactions; Table 10-1: Partial LOA Matrix as Originally...

  1. APPLICATION OF EYE TRACKING FOR MEASUREMENT AND EVALUATION IN HUMAN FACTORS STUDIES IN CONTROL ROOM MODERNIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovesdi, C.; Spielman, Z.; LeBlanc, K.

    An important element of human factors engineering (HFE) pertains to measurement and evaluation (M&E). HFE-M&E should be integrated throughout the entire control room modernization (CRM) process and be used for human-system performance evaluation and for diagnostic purposes in resolving potential human engineering deficiencies (HEDs) and other human-machine interface (HMI) design issues. NUREG-0711 describes how HFE in CRM should employ a hierarchical set of measures, particularly during integrated system validation (ISV), including plant performance, personnel task performance, situation awareness, cognitive workload, and anthropometric/physiological factors. Historically, subjective measures have been used primarily because they are easier to collect and do not require specialized equipment. However, relying solely on subjective measures in M&E has pitfalls that negatively impact reliability, sensitivity, and objectivity. As part of comprehensively capturing a diverse set of measures that strengthen findings and inferences about the benefits of emerging technologies such as advanced displays, this paper discusses the value of using eye tracking as an objective method in M&E. A brief description of eye tracking technology and relevant eye tracking measures is provided. Additionally, technical considerations and the unique challenges of using eye tracking in full-scale simulations are addressed. Finally, this paper shares preliminary findings regarding the use of a wearable eye tracking system in a full-scale simulator study. These findings should help guide future full-scale simulator studies using eye tracking as a methodology to evaluate human-system performance.

  2. Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study.

    PubMed

    Laffont, Isabelle; Biard, Nicolas; Chalubert, Gérard; Delahoche, Laurent; Marhic, Bruno; Boyer, François C; Leroux, Christophe

    2009-10-01

    Laffont I, Biard N, Chalubert G, Delahoche L, Marhic B, Boyer FC, Leroux C. Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study. Grasping robots are still difficult to use for persons with disabilities because of inadequate human-machine interfaces (HMIs). Our purpose was to evaluate the efficacy of a graphic interface enhanced by a panoramic camera to detect out-of-view objects and control a commercialized robotic grasping arm. Multicenter, open-label trial. Four French departments of physical and rehabilitation medicine. Control subjects (N=24; mean age, 33y) and 20 severely impaired patients (mean age, 44y; 5 with muscular dystrophies, 13 with traumatic tetraplegia, and 2 others) completed the study. None of these patients was able to grasp a 50-cL bottle without the robot. Participants were asked to grasp 6 objects scattered around their wheelchair using the robotic arm. They were able to select the desired object through the graphic interface available on their computer screen. Global success rate, time needed to select the object on the screen of the computer, number of clicks on the HMI, and satisfaction among users. We found a significantly lower success rate in patients (81.1% vs 88.7%; chi-square test, P=.017). The duration of the task was significantly longer in patients (71.6s vs 39.1s; P<.001). We set a cut-off for the maximum duration at 79 seconds, representing twice the amount of time needed by the control subjects to complete the task. In these conditions, the success rate for the impaired participants was 65% versus 85.4% for control subjects. The mean number of clicks necessary to select the object with the HMI was very close in both groups: patients used (mean +/- SD) 7.99+/-6.07 clicks, whereas controls used 7.04+/-2.87 clicks. Considering the severity of patients' impairment, all these differences were considered tiny. Furthermore, a high satisfaction rate was reported for this population concerning the use of the graphic interface. The graphic interface is of interest in controlling robotic arms for disabled people, with numerous potential applications in daily life.

  3. Development and Implementation of a Simplified Tool Measuring System

    NASA Astrophysics Data System (ADS)

    Chen, Jenn-Yih; Lee, Bean-Yin; Lee, Kuang-Chyi; Chen, Zhao-Kai

    2010-01-01

    This paper presents a simplified system for measuring the geometric profiles of end mills. First, a CCD camera was used to capture images of cutting tools. Then, an image acquisition card with an encoding function was adopted to feed the image source into a USB port of a PC so that the image could be shown on a monitor. In addition, two linear scales were mounted on the X-Y table for positioning and measuring purposes. The signals of the linear scales were transmitted to a 4-axis, 4-channel quadrature encoder counter card for position monitoring. C++ Builder was utilized to design the user-friendly human-machine interface of the tool measuring system. A cross line on the image in the interface shows a coordinate for the position measurement. Finally, a well-known tool measuring and inspection machine was employed as the measuring standard, and this study compares the measuring results obtained with that machine and with the proposed system. Experimental results show that the percentage of measuring error is acceptable for some geometric parameters of square or ball-nose end mills, demonstrating the effectiveness of the presented approach.
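
    A minimal sketch of the position bookkeeping such a system performs: quadrature counts from the linear scales are converted to millimetres, and a tool dimension is the difference between two cross-line positions. The scale resolution and counts are assumptions for illustration.

    COUNTS_PER_MM = 4 * 500        # 4x quadrature decoding of an assumed 500 line/mm linear scale

    def counts_to_mm(counts):
        return counts / COUNTS_PER_MM

    edge_left = counts_to_mm(12_400)    # X count with the cross line on one flute edge
    edge_right = counts_to_mm(24_680)   # X count on the opposite edge
    print(f"measured width: {edge_right - edge_left:.3f} mm")   # -> 6.140 mm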

  4. Prosthetic EMG control enhancement through the application of man-machine principles

    NASA Technical Reports Server (NTRS)

    Simcox, W. A.

    1977-01-01

    An area in medicine that appears well suited to man-machine principles is rehabilitation research, particularly when the motor aspects of the body are involved. If one considers the limb, whether functional or not, as the machine, the brain as the controller, and the neuromuscular system as the man-machine interface, the human body is reduced to a man-machine system that can benefit from the principles behind such systems. The area of rehabilitation that this paper deals with is that of an arm amputee and his prosthetic device. Reducing this area to its man-machine basics, the problem becomes one of attaining natural multiaxis prosthetic control using electromyographic (EMG) activity as the means of communication between man and prosthesis. In order to use EMG as the communication channel, it must be amplified and processed to yield a high-information signal suitable for control. The most common processing scheme employed is termed Mean Value Processing. This technique for extracting the useful EMG signal consists of a differential-to-single-ended conversion of the surface activity, followed by rectification and smoothing.
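
    A minimal sketch of the mean value processing described: the (already differential-to-single-ended) surface EMG is full-wave rectified and then smoothed to yield a low-bandwidth control signal. The sampling rate, window length, and synthetic burst are assumptions for illustration.

    import numpy as np

    def mean_value_process(emg, fs=1000, window_ms=100):
        # Full-wave rectify, then smooth with a moving average of window_ms milliseconds.
        rectified = np.abs(emg)
        n = max(1, int(fs * window_ms / 1000))
        return np.convolve(rectified, np.ones(n) / n, mode="same")

    # Synthetic trace: a burst of muscle activity on a quiet baseline, sampled at 1 kHz.
    emg = 0.02 * np.random.randn(2000)
    emg[500:1200] += 0.5 * np.random.randn(700)
    envelope = mean_value_process(emg)
    print(envelope[:500].mean(), envelope[500:1200].mean())    # baseline vs. contraction level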

  5. Low latency messages on distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Saltz, Joel

    1993-01-01

    Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications compared to a single processor system. The ability to send data or procedure is very flexible for minimizing message latency, based on the type of communication being performed. The test performed and the proposed interface are described.

  6. CDROM User Interface Evaluation: The Appropriateness of GUIs.

    ERIC Educational Resources Information Center

    Bosch, Victoria Manglano; Hancock-Beaulieu, Micheline

    1995-01-01

    Assesses the appropriateness of GUIs (graphical user interfaces), more specifically Windows-based interfaces for CD-ROM. An evaluation model is described that was developed to carry out an expert evaluation of the interfaces of seven CD-ROM products. Results are discussed in light of HCI (human-computer interaction) usability criteria and design…

  7. Automated visual imaging interface for the plant floor

    NASA Astrophysics Data System (ADS)

    Wutke, John R.

    1991-03-01

    The paper will provide an overview of the challenges facing a user of automated visual imaging ("AVI") machines and the philosophies that should be employed in designing them. As manufacturing tools and equipment become more sophisticated, it is increasingly difficult to maintain an efficient interaction between the operator and machine. The typical user of an AVI machine in a production environment is technically unsophisticated. Also, operator and machine ergonomics are often a neglected or poorly addressed part of an efficient manufacturing process. This paper presents a number of man-machine interface design techniques and philosophies that effectively solve these problems.

  8. [A new machinability test machine and the machinability of composite resins for core built-up].

    PubMed

    Iwasaki, N

    2001-06-01

    A new machinability test machine especially for dental materials was contrived. The purpose of this study was to evaluate the effects of grinding conditions on the machinability of core built-up resins using this machine, and to confirm the relationship between machinability and other properties of composite resins. The experimental machinability test machine consisted of a dental air-turbine handpiece, a control weight unit, a driving unit for the stage holding the test specimen, and so on. Machinability was evaluated as the change in volume after grinding with a diamond point. Five kinds of core built-up resins and human teeth were used in this study. The machinabilities of these composite resins increased with an increasing load during grinding, and decreased with repeated grinding. There was no obvious correlation between machinability and Vickers hardness; however, a negative correlation was observed between machinability and scratch width.

  9. A brain-machine interface to navigate a mobile robot in a planar workspace: enabling humans to fly simulated aircraft with EEG.

    PubMed

    Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy

    2013-03-01

    This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
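
    The paper's coding scheme is not reproduced here; the sketch below only conveys, under assumed error rates and a deliberately simple random-query policy, how repeated noisy binary inputs can drive a Bayesian posterior over candidate commands toward a confident selection, which is the flavor of protocol the abstract describes.

      # Hedged sketch: select one command from noisy binary inputs via Bayesian updates.
      import numpy as np

      def select_symbol(candidates, true_index, p_error=0.2, threshold=0.99, rng=None):
          if rng is None:
              rng = np.random.default_rng(0)
          posterior = np.full(len(candidates), 1.0 / len(candidates))
          while posterior.max() < threshold:
              # Query whether the target lies in a randomly chosen half of the set
              # (illustrative policy; a real protocol would optimize the queries).
              members = rng.permutation(len(candidates))[: len(candidates) // 2]
              in_half = np.zeros(len(candidates), dtype=bool)
              in_half[members] = True
              truth = bool(in_half[true_index])
              answer = truth if rng.random() > p_error else not truth  # noisy input
              likelihood = np.where(in_half == answer, 1.0 - p_error, p_error)
              posterior = posterior * likelihood
              posterior /= posterior.sum()
          return candidates[int(posterior.argmax())]

      print(select_symbol(["left", "straight", "right", "up", "down"], true_index=2))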

  10. A Novel Feature Optimization for Wearable Human-Computer Interfaces Using Surface Electromyography Sensors

    PubMed Central

    Zhang, Xiong; Zhao, Yacong; Zhang, Yu; Zhong, Xuefei; Fan, Zhaowen

    2018-01-01

    The novel human-computer interface (HCI) using bioelectrical signals as input is a valuable tool to improve the lives of people with disabilities. In this paper, surface electromyography (sEMG) signals induced by four classes of wrist movements were acquired from four sites on the lower arm with our designed system. Forty-two features were extracted from the time, frequency and time-frequency domains. Optimal channels were determined from the single-channel classification performance rank. Optimal features were selected according to a modified entropy criterion (EC) and a Fisher discrimination (FD) criterion. The feature selection results were evaluated by four different classifiers, and compared with other conventional feature subsets. In online tests, the wearable system acquired real-time sEMG signals. The selected features and trained classifier model were used to control a telecar through four different paradigms in a designed environment with simple obstacles. Performance was evaluated based on travel time (TT) and recognition rate (RR). The results of the hardware evaluation verified the feasibility of our acquisition systems and ensured signal quality. Single-channel analysis results indicated that the channel located on the extensor carpi ulnaris (ECU) performed best, with a mean classification accuracy of 97.45% for all movement pairs. Channels placed on the ECU and the extensor carpi radialis (ECR) were selected according to the accuracy rank. Experimental results showed that the proposed FD method was better than other feature selection methods and single-type features. The combination of FD and random forest (RF) performed best in offline analysis, with 96.77% multi-class RR. Online results illustrated that the state-machine paradigm with a 125 ms window had the highest maneuverability and was closest to real-life control. Subjects could accomplish online sessions with three sEMG-based paradigms, with average times of 46.02, 49.06 and 48.08 s, respectively. These experiments validate the feasibility of the proposed real-time wearable HCI system and algorithms, providing a potential assistive device interface for persons with disabilities. PMID:29543737
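
    As a rough, self-contained illustration of the offline pipeline above (the data are synthetic, and the paper's feature definitions, channel selection, and modified entropy criterion are omitted), features can be ranked with a Fisher-style score and a random forest evaluated on the selected subset:

      # Hedged sketch: Fisher-score feature selection followed by a random forest.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 42))            # 42 features, as in the paper
      y = rng.integers(0, 4, size=200)          # four wrist-movement classes
      X[y == 1, :5] += 1.5                      # make a few features informative

      def fisher_score(X, y):
          """Between-class over within-class variance, computed per feature."""
          overall = X.mean(axis=0)
          num = np.zeros(X.shape[1])
          den = np.zeros(X.shape[1])
          for c in np.unique(y):
              Xc = X[y == c]
              num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
              den += len(Xc) * Xc.var(axis=0)
          return num / (den + 1e-12)

      top = np.argsort(-fisher_score(X, y))[:10]    # keep the 10 best-scoring features
      rf = RandomForestClassifier(n_estimators=100, random_state=0)
      acc = cross_val_score(rf, X[:, top], y, cv=5).mean()
      print(f"Cross-validated accuracy on selected features: {acc:.2f}")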

  11. [The current state of the brain-computer interface problem].

    PubMed

    Shurkhay, V A; Aleksandrova, E V; Potapov, A A; Goryainov, S A

    2015-01-01

    It was only 40 years ago that the first PC appeared. Over this period, rather short in historical terms, we have witnessed revolutionary changes in the lives of individuals and of society as a whole. Computer technologies are connected, directly or indirectly, with virtually every field. We can currently claim that computers are far superior to the human mind on a number of parameters; however, machines lack the key feature: they are incapable of independent thinking (like a human). However, the key to the successful development of humankind is collaboration between the brain and the computer rather than competition. Such collaboration, in which a computer broadens, supplements, or replaces some brain functions, is known as the brain-computer interface. Our review focuses on real-life implementations of this collaboration.

  12. Development of sensitized pick coal interface detector system

    NASA Technical Reports Server (NTRS)

    Burchill, R. F.

    1982-01-01

    One approach for detection of the coal interface is measurement of pick cutting loads and shock through the use of pick strain gage load cells and accelerometers. The cutting drum of a longwall mining machine contains a number of cutting picks. In order to measure pick loads and shocks, one pick was instrumented, and telemetry was used to transmit the signals from the drum to an instrument-type tape recorder. A data system using FM telemetry was designed to transfer cutting bit load and shock information from the drum of a longwall shearer coal mining machine to a chassis-mounted data recorder. The designs of the components in the test data system were finalized, the required instruments were assembled, the instrument system was evaluated in an above-ground simulation test, and an underground test series to obtain tape-recorded sensor data was conducted.

  13. Detection and classification of tastants in vivo using a novel bioelectronic tongue in combination with brain-machine interface.

    PubMed

    Zhen Qin; Bin Zhang; Ning Hu; Ping Wang

    2015-01-01

    The mammalian gustatory system is acknowledged as one of the most valid chemosensing systems. The sense of taste in particular provides critical information about the ingestion of toxic and noxious chemicals. Thus, the potential of utilizing the rat gustatory system for detecting sapid substances is investigated. By recording the electrical activity of neurons in the gustatory cortex, a novel bioelectronic tongue system is developed in combination with brain-machine interface technology. Features are extracted from both spikes and local field potentials. By visualizing these features, classification is performed, and the responses to different tastants can be prominently separated from each other. The results suggest that this in vivo bioelectronic tongue is capable of detecting tastants and will provide a promising platform for potential applications in evaluating the palatability of food and beverages.

  14. Intelligent systems and advanced user interfaces for design, operation, and maintenance of command management systems

    NASA Technical Reports Server (NTRS)

    Potter, William J.; Mitchell, Christine M.

    1993-01-01

    Historically, command management systems (CMS) have been large and expensive spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS system. New technologies, in addition to a core CMS common to a range of spacecraft, may facilitate the training and enhance the efficiency of CMS operations. Current mission operations center (MOC) hardware and software include Unix workstations, the C/C++ programming languages, and an X window interface. This configuration provides the power and flexibility to support sophisticated and intelligent user interfaces that exploit state-of-the-art technologies in human-machine interaction, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of these issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, human-machine systems design and analysis tools (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of the CMS for a specific spacecraft as well as continuity for CMS design and development across spacecraft. The first six months of this research saw a broad investigation by Georgia Tech researchers into the function, design, and operation of current and planned command management systems at Goddard Space Flight Center. As the first step, the researchers attempted to understand the current and anticipated horizons of command management systems at Goddard. Preliminary results are given on CMS commonalities and causes of low re-use, and methods are proposed to facilitate increased re-use.

  15. Full-motion video analysis for improved gender classification

    NASA Astrophysics Data System (ADS)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video motion-capture data provide a dataset with higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on this larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
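
    A compact stand-in for the reported comparison (synthetic features rather than the actual motion-capture data): leave-one-out cross-validation of linear discriminant analysis against a nonlinear, RBF-kernel support vector machine.

      # Hedged sketch: LDA versus a nonlinear SVM under leave-one-out cross-validation.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(98, 12))            # 98 trials, 12 synthetic motion features
      y = rng.integers(0, 2, size=98)          # binary gender labels (synthetic)
      X[y == 1] += 0.4 * np.sin(X[y == 1])     # mildly nonlinear class structure

      loo = LeaveOneOut()
      models = [("LDA", LinearDiscriminantAnalysis()),
                ("RBF SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=2.0)))]
      for name, model in models:
          acc = cross_val_score(model, X, y, cv=loo).mean()
          print(f"{name}: leave-one-out accuracy = {acc:.2f}")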

  16. Force reflecting hand controller

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas A. (Inventor); Snow, Edward R. (Inventor); Townsend, William T. (Inventor)

    1993-01-01

    A universal input device for interfacing a human operator with a slave machine such as a robot or the like includes a plurality of serially connected mechanical links extending from a base. A handgrip is connected to the mechanical links distal from the base such that a human operator may grasp the handgrip and control the position thereof relative to the base through the mechanical links. A plurality of rotary joints is arranged to connect the mechanical links together to provide at least three translational degrees of freedom and at least three rotational degrees of freedom of motion of the handgrip relative to the base. A cable and pulley assembly for each joint is connected to a corresponding motor for transmitting forces from the slave machine to the handgrip to provide kinesthetic feedback to the operator and for producing control signals that may be transmitted from the handgrip to the slave machine. The device gives excellent kinesthetic feedback, high-fidelity force/torque feedback, a kinematically simple structure, mechanically decoupled motion in all six degrees of freedom, and zero backlash. The device also has a much larger work envelope, greater stiffness and responsiveness, smaller stowage volume, and better overlap of the human operator's range of motion than previous designs.

  17. Computation of the Distribution of the Fiber-Matrix Interface Cracks in the Edge Trimming of CFRP

    NASA Astrophysics Data System (ADS)

    Wang, Fu-ji; Zhang, Bo-yu; Ma, Jian-wei; Bi, Guang-jian; Hu, Hai-bo

    2018-04-01

    Edge trimming is commonly used to bring CFRP components to the right dimension and shape in the aerospace industry. However, various forms of undesirable machining damage occur frequently, significantly decreasing the material performance of CFRP. The damage is difficult to predict and control due to its complicated governing laws, causing unsatisfactory machining quality of CFRP components. Since most of the damage has the same essence, namely fiber-matrix interface cracks, this study aims to calculate their distribution in edge trimming of CFRP and thereby obtain the effects of the machining parameters, which could help guide the optimal selection of machining parameters in engineering. Through orthogonal cutting experiments, the quantitative relation between the fiber-matrix interface crack depth and the fiber cutting angle, cutting depth, and cutting speed is established. Based on an analysis of the material removal process at any location of the workpiece in edge trimming, the instantaneous cutting parameters are calculated and the formation process of the fiber-matrix interface crack is revealed. Finally, the computational method for the fiber-matrix interface cracks in edge trimming of CFRP is proposed. From the computational results, it is found that the fiber orientation of the CFRP workpiece is the most significant factor affecting the fiber-matrix interface cracks: it can not only change their depth from micrometers to millimeters but also controls their distribution pattern. The other machining parameters only influence the crack depth and have little effect on the distribution pattern.

  18. The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces.

    PubMed

    O'Shea, Daniel J; Trautmann, Eric; Chandrasekaran, Chandramouli; Stavisky, Sergey; Kao, Jonathan C; Sahani, Maneesh; Ryu, Stephen; Deisseroth, Karl; Shenoy, Krishna V

    2017-01-01

    A central goal of neuroscience is to understand how populations of neurons coordinate and cooperate in order to give rise to perception, cognition, and action. Nonhuman primates (NHPs) are an attractive model with which to understand these mechanisms in humans, primarily due to the strong homology of their brains and the cognitively sophisticated behaviors they can be trained to perform. Using electrode recordings, the activity of one to a few hundred individual neurons may be measured electrically, which has enabled many scientific findings and the development of brain-machine interfaces. Despite these successes, electrophysiology samples sparsely from neural populations and provides little information about the genetic identity and spatial micro-organization of recorded neurons. These limitations have spurred the development of all-optical methods for neural circuit interrogation. Fluorescent calcium signals serve as a reporter of neuronal responses, and when combined with post-mortem optical clearing techniques such as CLARITY, provide dense recordings of neuronal populations, spatially organized and annotated with genetic and anatomical information. Here, we advocate that this methodology, which has been of tremendous utility in smaller animal models, can and should be developed for use with NHPs. We review here several of the key opportunities and challenges for calcium-based optical imaging in NHPs. We focus on motor neuroscience and brain-machine interface design as representative domains of opportunity within the larger field of NHP neuroscience.

  19. An EOG-Based Human-Machine Interface for Wheelchair Control.

    PubMed

    Huang, Qiyun; He, Shenghong; Wang, Qihong; Gu, Zhenghui; Peng, Nengneng; Li, Kai; Zhang, Yuandong; Shao, Ming; Li, Yuanqing

    2017-07-27

    Non-manual human-machine interfaces (HMIs) have been studied for wheelchair control with the aim of helping severely paralyzed individuals regain some mobility. The challenge is to rapidly, accurately and sufficiently produce control commands, such as left and right turns, forward and backward motions, acceleration, deceleration, and stopping. In this paper, a novel electrooculogram (EOG)-based HMI is proposed for wheelchair control. Thirteen flashing buttons are presented in the graphical user interface (GUI), and each of the buttons corresponds to a command. These buttons flash one by one in a pre-defined sequence. The user can select a button by blinking in sync with its flashes. The algorithm detects the eye blinks from a channel of vertical EOG data and determines the user's target button based on the synchronization between the detected blinks and the button's flashes. For healthy subjects/patients with spinal cord injuries (SCIs), the proposed HMI achieved an average accuracy of 96.7%/91.7% and a response time of 3.53 s/3.67 s with zero false positive rates (FPRs). Using only one channel of vertical EOG signals associated with eye blinks, the proposed HMI can accurately provide sufficient commands with a satisfactory response time. The proposed HMI provides a novel non-manual approach for severely paralyzed individuals to control a wheelchair. Compared with a newly established EOG-based HMI, the proposed HMI can generate more commands with higher accuracy, a lower FPR and fewer electrodes.
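
    The selection rule can be pictured with a small sketch (the timing, threshold, and flash schedule below are illustrative assumptions, not the published parameters): blinks are detected from the vertical EOG channel, and the selected button is the one whose flash onsets best coincide with the detected blinks.

      # Hedged sketch: match detected blink times against each button's flash schedule.
      import numpy as np

      def detect_blinks(eog, fs, threshold):
          """Return times (s) where the vertical EOG crosses the threshold upward."""
          above = eog > threshold
          onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
          return onsets / fs

      def select_button(blink_times, flash_schedule, tolerance=0.25):
          """flash_schedule maps button id -> array of flash onset times in seconds."""
          scores = {}
          for button, flashes in flash_schedule.items():
              scores[button] = sum(np.any(np.abs(flashes - t) < tolerance)
                                   for t in blink_times)
          return max(scores, key=scores.get)

      # Toy example that skips the detection step: blinks follow button 7's flashes.
      flashes = {b: np.arange(5) * 13 * 0.4 + b * 0.4 for b in range(13)}
      blinks = flashes[7] + 0.05             # user blinks ~50 ms after each flash
      print(select_button(blinks, flashes))  # -> 7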

  20. Automatic Speech Recognition in Air Traffic Control: a Human Factors Perspective

    NASA Technical Reports Server (NTRS)

    Karlsson, Joakim

    1990-01-01

    The introduction of Automatic Speech Recognition (ASR) technology into the Air Traffic Control (ATC) system has the potential to improve overall safety and efficiency. However, because ASR technology is inherently a part of the man-machine interface between the user and the system, the human factors issues involved must be addressed. Here, some of the human factors problems are identified and related methods of investigation are presented. Research at M.I.T.'s Flight Transportation Laboratory is being conducted from a human factors perspective, focusing on intelligent parser design, presentation of feedback, error correction strategy design, and optimal choice of input modalities.

  1. Man-machine interface analysis of the flight design system

    NASA Technical Reports Server (NTRS)

    Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.

    1978-01-01

    The objective of the current effort was to perform a broad analysis of the human factors issues involved in the design of the Flight Design System (FDS). The analysis was intended to include characteristics of the system itself, such as: (1) basic structure and functional capabilities of FDS; (2) user backgrounds, capabilities, and possible modes of use; (3) FDS interactive dialogue, problem solving aids; (4) system data management capabilities; and to include, as well, such system related matters as: (1) flight design team structure; (2) roles of technicians; (3) user training; and (4) methods of evaluating system performance. Wherever possible, specific recommendations are made. In other cases, the issues which seem most important are identified. In some cases, additional analyses or experiments which might provide resolution are suggested.

  2. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Emma M.; Hendrix, Val; Chertkov, Michael

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals – such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis are becoming significant, with more data and multi-objective concerns. Efficient applications of analysis and the machine learning field are being considered in the loop.

  3. Taking Over Control From Highly Automated Vehicles in Complex Traffic Situations: The Role of Traffic Density.

    PubMed

    Gold, Christian; Körber, Moritz; Lechner, David; Bengler, Klaus

    2016-06-01

    The aim of this study was to quantify the impact of traffic density and verbal tasks on takeover performance in highly automated driving. In highly automated vehicles, the driver has to occasionally take over vehicle control when approaching system limits. To ensure safety, the ability of the driver to regain control of the driving task under various driving situations and different driver states needs to be quantified. Seventy-two participants experienced takeover situations requiring an evasive maneuver on a three-lane highway with varying traffic density (zero, 10, and 20 vehicles per kilometer). In a between-subjects design, half of the participants were engaged in a verbal 20-Questions Task, representing speaking on the phone while driving in a highly automated vehicle. The presence of traffic in takeover situations led to longer takeover times and worse takeover quality in the form of shorter time to collision and more collisions. The 20-Questions Task did not influence takeover time but seemed to have minor effects on the takeover quality. For the design and evaluation of human-machine interaction in takeover situations of highly automated vehicles, the traffic state seems to play a major role, compared to the driver state, manipulated by the 20-Questions Task. The present results can be used by developers of highly automated systems to appropriately design human-machine interfaces and to assess the driver's time budget for regaining control.

  4. Experimental Characterization and Modeling of Thermal Contact Resistance of Electric Machine Stator-to-Cooling Jacket Interface Under Interference Fit Loading

    DOE PAGES

    Cousineau, Justine Emily; Bennion, Kevin S.; Chieduko, Victor; ...

    2018-05-08

    Cooling of electric machines is a key to increasing power density and improving reliability. This paper focuses on the design of a machine using a cooling jacket wrapped around the stator. The thermal contact resistance (TCR) between the electric machine stator and cooling jacket is a significant factor in overall performance and is not well characterized. This interface is typically an interference fit subject to compressive pressure exceeding 5 MPa. An experimental investigation of this interface was carried out using a thermal transmittance setup using pressures between 5 and 10 MPa. Furthermore, the results were compared to currently available models for contact resistance, and one model was adapted for prediction of TCR in future motor designs.

  5. Experimental Characterization and Modeling of Thermal Contact Resistance of Electric Machine Stator-to-Cooling Jacket Interface Under Interference Fit Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cousineau, Justine Emily; Bennion, Kevin S.; Chieduko, Victor

    Cooling of electric machines is a key to increasing power density and improving reliability. This paper focuses on the design of a machine using a cooling jacket wrapped around the stator. The thermal contact resistance (TCR) between the electric machine stator and cooling jacket is a significant factor in overall performance and is not well characterized. This interface is typically an interference fit subject to compressive pressure exceeding 5 MPa. An experimental investigation of this interface was carried out using a thermal transmittance setup using pressures between 5 and 10 MPa. Furthermore, the results were compared to currently available models for contact resistance, and one model was adapted for prediction of TCR in future motor designs.

  6. The desktop interface in intelligent tutoring systems

    NASA Technical Reports Server (NTRS)

    Baudendistel, Stephen; Hua, Grace

    1987-01-01

    The interface between an Intelligent Tutoring System (ITS) and the person being tutored is critical to the success of the learning process. If the interface to the ITS is confusing or non-supportive of the tutored domain, the effectiveness of the instruction will be diminished or lost entirely. Consequently, the interface to an ITS should be highly integrated with the domain to provide a robust and semantically rich learning environment. In building an ITS for ZetaLISP on a LISP Machine, a Desktop Interface was designed to support a programming learning environment. Using the bitmapped display, windows, and mouse, three desktops were designed to support self-study and tutoring of ZetaLISP. Through organization, well-defined boundaries, and domain support facilities, the desktops provide substantial flexibility and power for the student and facilitate learning ZetaLISP programming while screening the student from the complex LISP Machine environment. The student can concentrate on learning ZetaLISP programming and not on how to operate the interface or a LISP Machine.

  7. Intravascular Neural Interface with Nanowire Electrode

    PubMed Central

    Watanabe, Hirobumi; Takahashi, Hirokazu; Nakao, Masayuki; Walton, Kerry; Llinás, Rodolfo R.

    2010-01-01

    A minimally invasive electrical recording and stimulating technique capable of simultaneously monitoring the activity of a significant number (e.g., 10³ to 10⁴) of neurons is an absolute prerequisite in developing an effective brain–machine interface. Although there are many excellent methodologies for recording single or multiple neurons, there has been no methodology for accessing large numbers of cells in a behaving experimental animal or human individual. Brain vascular parenchyma is a promising candidate for addressing this problem. It has been proposed [1, 2] that a multitude of nanowire electrodes introduced into the central nervous system through the vascular system to address any brain area may be a possible solution. In this study we implement a design for such a microcatheter for ex vivo experiments. Using Wollaston platinum wire, we design a submicron-scale electrode and develop a fabrication method. We then evaluate the mechanical properties of the electrode in a flow when passing through the intricacies of the capillary bed in ex vivo Xenopus laevis experiments. Furthermore, we demonstrate the feasibility of intravascular recording in the spinal cord of Xenopus laevis. PMID:21572940

  8. What makes an automated teller machine usable by blind users?

    PubMed

    Manzke, J M; Egan, D H; Felix, D; Krueger, H

    1998-07-01

    Fifteen blind and sighted subjects, the latter serving as a control group for acceptance, were asked for their requirements for automated teller machines (ATMs). Both groups also tested the usability of a partially operational ATM mock-up. This machine was based on an existing cash dispenser, providing natural speech output, different function menus and different key arrangements. Performance and subjective evaluation data of blind and sighted subjects were collected. All blind subjects were able to operate the ATM successfully. The implemented speech output was the main usability factor for them. The different interface designs did not significantly affect performance and subjective evaluation. Nevertheless, design recommendations can be derived from the requirement assessment. The sighted subjects were rather open to design modifications, especially the implementation of speech output. However, there was also a mismatch between the requirements of the two subject groups, mainly concerning the key arrangement.

  9. A methodology for the design and evaluation of user interfaces for interactive information systems. Ph.D. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Farooq, Mohammad U.

    1986-01-01

    The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
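
    A toy rendering of that three-layer view (the class and method names are invented for illustration; the thesis's actual design is richer): the interactive layer turns user input into interaction events, the dialogue manager maps events onto application commands, and the application interface layer talks to the underlying information system.

      # Hedged sketch of the three-layer "virtual machine" view of a user interface.
      class ApplicationInterfaceLayer:
          """Wraps the underlying information system."""
          def execute(self, command: str) -> str:
              return f"results for '{command}'"

      class DialogueManagerLayer:
          """Maps interaction events onto application commands."""
          def __init__(self, app: ApplicationInterfaceLayer):
              self.app = app
          def handle_event(self, event: dict) -> str:
              if event["type"] == "query":
                  return self.app.execute(event["text"])
              return "unrecognized interaction event"

      class InteractiveLayer:
          """Collects user input and presents output (a command-language dialogue)."""
          def __init__(self, manager: DialogueManagerLayer):
              self.manager = manager
          def submit(self, text: str) -> str:
              return self.manager.handle_event({"type": "query", "text": text})

      ui = InteractiveLayer(DialogueManagerLayer(ApplicationInterfaceLayer()))
      print(ui.submit("FIND author = 'Dominick'"))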

  10. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources.

    PubMed

    Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems.
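
    The FDMF formulation itself is not reproduced here; the hedged sketch below only conveys the general flavor of evidence fusion, combining human (BCI) and machine (computer vision) class scores weighted by their associated uncertainty. The values and weighting rule are illustrative assumptions.

      # Hedged sketch: uncertainty-weighted fusion of human and machine class scores.
      import numpy as np

      def fuse(human_scores, human_uncertainty, machine_scores, machine_uncertainty):
          """Weight each evidence source inversely to its uncertainty, then combine."""
          w_h = 1.0 / (human_uncertainty + 1e-6)
          w_m = 1.0 / (machine_uncertainty + 1e-6)
          fused = (w_h * np.asarray(human_scores) +
                   w_m * np.asarray(machine_scores)) / (w_h + w_m)
          return fused / fused.sum()

      # Target vs. non-target scores for one RSVP image (illustrative values).
      human = [0.70, 0.30]     # EEG-based decision; a fatigued user is less certain
      machine = [0.55, 0.45]   # computer-vision decision
      print(fuse(human, 0.4, machine, 0.1))   # machine evidence dominates here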

  11. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources

    PubMed Central

    Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734

  12. Agent-based human-robot interaction of a combat bulldozer

    NASA Astrophysics Data System (ADS)

    Granot, Reuven; Feldman, Maxim

    2004-09-01

    A small-scale, supervised autonomous bulldozer operating at a remote site was developed to explore agent-based human intervention. The model is based on a Lego Mindstorms kit and represents combat equipment whose job performance does not require high accuracy. The model enables evaluation of system response to different operator interventions, as well as of a small colony of semiautonomous dozers. The supervising human may react better than a fully autonomous system to unexpected contingent events, which are a major barrier to implementing full autonomy. The automation is introduced as an improved Man-Machine Interface (MMI) by developing control agents as intelligent tools that negotiate between human requests and task-level controllers, as well as with other elements of the software environment. Current UGVs demand significant communication resources and constant human operation. Therefore they will be replaced by semi-autonomous, human supervisory controlled (telerobotic) systems. For human intervention at the low layers of the control hierarchy, we suggest a task-oriented control agent to take care of the fluent transition between the state in which the robot operates and the one imposed by the human. This transition should take care of the imperfections responsible for the improper operation of the robot by disconnecting them or adapting them to the new situation. Preliminary conclusions from the small-scale experiments are presented.

  13. Battery electric vehicles - implications for the driver interface.

    PubMed

    Neumann, Isabel; Krems, Josef F

    2016-03-01

    The current study examines the human-machine interface of a battery electric vehicle (BEV) from a user perspective, focussing on the evaluation of BEV-specific displays, the relevance of the provided information and challenges for drivers due to the concept of electricity in a road vehicle. A sample of 40 users drove a BEV for 6 months. Data were gathered at three points of data collection. Participants perceived the BEV-specific displays as only moderately reliable and helpful for estimating the displayed parameters. This was even less the case after driving the BEV for 3 months. A taxonomy of user requirements was compiled, revealing the need for improved and additional information, especially regarding energy consumption and efficiency. Drivers had difficulty understanding electrical units and the energy consumption of the BEV. Against the background of general principles for display design, the results provide implications for how to display relevant information and how to facilitate drivers' understanding of energy consumption in BEVs. Practitioner Summary: Battery electric vehicle (BEV) displays need to incorporate new information. A taxonomy of user requirements was compiled revealing the need for improved and additional information in the BEV interface. Furthermore, drivers had trouble understanding electrical units and energy consumption; therefore, appropriate assistance is required. Design principles which are specifically important in the BEV context are discussed.

  14. Graphical user interfaces for symbol-oriented database visualization and interaction

    NASA Astrophysics Data System (ADS)

    Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger

    1997-04-01

    In this approach, two basic services designed for the engineering of computer-based systems are combined: a symbol-oriented man-machine service and a high-speed database service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI builder and a GUI manager for the database service, based upon the man-machine service and using the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI builder and GUI manager, users can build and operate their own graphical user interfaces for a given database according to their needs without writing a single line of code.

  15. Assessing Advanced Technology in CENATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Barker, Kevin J.; Gioiosa, Roberto

    PNNL's Center for Advanced Technology Evaluation (CENATE) is a new U.S. Department of Energy center whose mission is to assess and facilitate access to emerging computing technology. CENATE is assessing a range of advanced technologies, from evolutionary to disruptive. Technologies of interest include the processor socket (homogeneous and accelerated systems), memories (dynamic, static, memory cubes), motherboards, networks (network interface cards and switches), and input/output and storage devices. CENATE is developing a multi-perspective evaluation process based on integrating advanced system instrumentation, performance measurements, and modeling and simulation. We show evaluations of two emerging network technologies: silicon photonics interconnects and the Data Vortex network. CENATE's evaluation also addresses the question of which machine is best for a given workload under certain constraints. We show a performance-power tradeoff analysis of a well-known machine learning application on two systems.

  16. OMV mission simulator

    NASA Technical Reports Server (NTRS)

    Cok, Keith E.

    1989-01-01

    The Orbital Maneuvering Vehicle (OMV) will be remotely piloted during rendezvous, docking, or proximity operations with target spacecraft from a ground control console (GCC). The real-time mission simulator and graphics being used to design a console pilot-machine interface are discussed. A real-time orbital dynamics simulator drives the visual displays. The dynamics simulator includes a J2 oblate earth gravity model and a generalized 1962 rotating atmospheric and drag model. The simulator also provides a variable-length communication delay to represent use of the Tracking and Data Relay Satellite System (TDRSS) and NASA Communications (NASCOM). Input parameter files determine the graphics display. This feature allows rapid prototyping since displays can be easily modified from pilot recommendations. A series of pilot reviews are being held to determine an effective pilot-machine interface. Pilots fly missions with nominal to 3-sigma dispersions in translational or rotational axes. Console dimensions, switch type and layout, hand controllers, and graphic interfaces are evaluated by the pilots and the GCC simulator is modified for subsequent runs. Initial results indicate a pilot preference for analog versus digital displays and for two 3-degree-of-freedom hand controllers.
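
    For reference, a J2 oblate-Earth gravity term of the kind such a dynamics simulator uses can be written in a few lines. This is the standard textbook form, not the OMV simulator's code, and the sample state vector is arbitrary.

      # Point-mass gravity plus the standard J2 perturbation (Earth-centered frame).
      import numpy as np

      MU = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
      RE = 6378137.0          # Earth's equatorial radius, m
      J2 = 1.08262668e-3      # second zonal harmonic

      def gravity_accel(r_vec):
          x, y, z = r_vec
          r = np.linalg.norm(r_vec)
          a_point = -MU / r**3 * np.asarray(r_vec, dtype=float)
          k = 1.5 * J2 * MU * RE**2 / r**5
          a_j2 = k * np.array([x * (5 * z**2 / r**2 - 1),
                               y * (5 * z**2 / r**2 - 1),
                               z * (5 * z**2 / r**2 - 3)])
          return a_point + a_j2

      print(gravity_accel([7000e3, 0.0, 0.0]))   # acceleration at 7000 km radius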

  17. Advances in neuroprosthetic learning and control.

    PubMed

    Carmena, Jose M

    2013-01-01

    Significant progress has occurred in the field of brain-machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action.

  18. Advances in Neuroprosthetic Learning and Control

    PubMed Central

    Carmena, Jose M.

    2013-01-01

    Significant progress has occurred in the field of brain–machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action. PMID:23700383

  19. Toward an autonomous brain machine interface: integrating sensorimotor reward modulation and reinforcement learning.

    PubMed

    Marsh, Brandi T; Tarigoppula, Venkata S Aditya; Chen, Chen; Francis, Joseph T

    2015-05-13

    For decades, neurophysiologists have worked on elucidating the function of the cortical sensorimotor control system from the standpoint of kinematics or dynamics. Recently, computational neuroscientists have developed models that can emulate changes seen in the primary motor cortex during learning. However, these simulations rely on the existence of a reward-like signal in the primary sensorimotor cortex. Reward modulation of the primary sensorimotor cortex has yet to be characterized at the level of neural units. Here we demonstrate that single units/multiunits and local field potentials in the primary motor (M1) cortex of nonhuman primates (Macaca radiata) are modulated by reward expectation during reaching movements and that this modulation is present even while subjects passively view cursor motions that are predictive of either reward or nonreward. After establishing this reward modulation, we set out to determine whether we could correctly classify rewarding versus nonrewarding trials, on a moment-to-moment basis. This reward information could then be used in collaboration with reinforcement learning principles toward an autonomous brain-machine interface. The autonomous brain-machine interface would use M1 for both decoding movement intention and extraction of reward expectation information as evaluative feedback, which would then update the decoding algorithm as necessary. In the work presented here, we show that this, in theory, is possible.
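
    One way to picture the proposed closed loop (purely illustrative; the classifier, decoder, and update rule below are toy stand-ins, not the authors' algorithm): a reward-expectation read-out from M1 features supplies an evaluative signal that scales a reinforcement-style update of the movement decoder.

      # Hedged sketch: reward expectation as evaluative feedback for decoder updates.
      import numpy as np

      rng = np.random.default_rng(0)

      def reward_probability(features, clf_w):
          """Logistic read-out of reward expectation from neural features."""
          return 1.0 / (1.0 + np.exp(-features @ clf_w))

      def update_decoder(decoder_w, features, decoded_output, reward_p, lr=0.01):
          """Reinforce the current mapping when reward is likely, weaken it otherwise."""
          evaluative = 2.0 * reward_p - 1.0          # evaluative signal in [-1, 1]
          return decoder_w + lr * evaluative * np.outer(decoded_output, features)

      n_units, n_outputs = 32, 2
      clf_w = rng.normal(size=n_units) * 0.1         # toy pretrained reward classifier
      decoder_w = rng.normal(size=(n_outputs, n_units)) * 0.1

      firing = rng.poisson(5.0, size=n_units).astype(float)
      velocity = decoder_w @ firing                  # decoded cursor velocity
      decoder_w = update_decoder(decoder_w, firing, velocity,
                                 reward_probability(firing, clf_w))
      print(decoder_w.shape)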

  20. Insect-machine interface based neurocybernetics.

    PubMed

    Bozkurt, Alper; Gilmour, Robert F; Sinha, Ayesa; Stern, David; Lal, Amit

    2009-06-01

    We present details of a novel bioelectric interface formed by placing microfabricated probes into insects during the metamorphic growth cycle. The inserted microprobes emerge with the insect; the development of tissue around the electronics during pupal development yields mechanically stable and electrically reliable structures coupled to the insect. Remarkably, the insects do not react adversely or otherwise to the electronics inserted at the pupal stage, as they do when electrodes are inserted at the adult stage. We report on the electrical and mechanical characteristics of this novel bioelectronic interface, which we believe will be adopted by many investigators studying biological behavior in insects, since it avoids the traumatic effects encountered when probes are inserted at the adult stage. This novel insect-machine interface also enables hybrid insect-machine platforms for further studies. As an application, we demonstrate our first results toward navigation of flight in moths. When instrumented with equipment to gather information for environmental sensing, such insects could potentially assist humans in monitoring the ecosystems that we share with them for sustainability. The simplicity of the optimized surgical procedure we invented allows batch insertions for automated mass production of such hybrid insect-machine platforms. Therefore, our bioelectronic interface and hybrid insect-machine platform enable multidisciplinary scientific and engineering studies, not only to investigate the details of insect behavioral physiology but also to control it.

  1. Multivariate Models for Prediction of Human Skin Sensitization ...

    EPA Pesticide Factsheets

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine…
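
    A minimal stand-in for the modeling setup (synthetic data; the real variable groups combine DPRA, h-CLAT, KeratinoSens, read-across, and physicochemical properties such as log P): train logistic regression and a support vector machine on 72 substances and score them on an external set of 24.

      # Hedged sketch: two classifiers, a 72-substance training set, a 24-substance test set.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(96, 6))    # synthetic stand-ins for assay outputs and log P
      y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=96) > 0).astype(int)  # hazard

      X_train, y_train = X[:72], y[:72]     # training substances
      X_test, y_test = X[72:], y[72:]       # external test set

      models = [("logistic regression", make_pipeline(StandardScaler(), LogisticRegression())),
                ("support vector machine", make_pipeline(StandardScaler(), SVC()))]
      for name, model in models:
          acc = model.fit(X_train, y_train).score(X_test, y_test)
          print(f"{name}: external-set accuracy = {acc:.2f}")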

  2. Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma

    PubMed Central

    Wrzeszczynski, Kazimierz O.; Frank, Mayu O.; Koyama, Takahiko; Rhrissorrakrai, Kahn; Robine, Nicolas; Utro, Filippo; Emde, Anne-Katrin; Chen, Bo-Juen; Arora, Kanika; Shah, Minita; Vacic, Vladimir; Norel, Raquel; Bilal, Erhan; Bergmann, Ewa A.; Moore Vogel, Julia L.; Bruce, Jeffrey N.; Lassman, Andrew B.; Canoll, Peter; Grommes, Christian; Harvey, Steve; Parida, Laxmi; Michelini, Vanessa V.; Zody, Michael C.; Jobanputra, Vaidehi; Royyuru, Ajay K.

    2017-01-01

    Objective: To analyze a glioblastoma tumor specimen with 3 different platforms and compare potentially actionable calls from each. Methods: Tumor DNA was analyzed by a commercial targeted panel. In addition, tumor-normal DNA was analyzed by whole-genome sequencing (WGS) and tumor RNA was analyzed by RNA sequencing (RNA-seq). The WGS and RNA-seq data were analyzed by a team of bioinformaticians and cancer oncologists, and separately by IBM Watson Genomic Analytics (WGA), an automated system for prioritizing somatic variants and identifying drugs. Results: More variants were identified by WGS/RNA analysis than by targeted panels. WGA completed a comparable analysis in a fraction of the time required by the human analysts. Conclusions: The development of an effective human-machine interface in the analysis of deep cancer genomic datasets may provide potentially clinically actionable calls for individual patients in a more timely and efficient manner than currently possible. ClinicalTrials.gov identifier: NCT02725684. PMID:28740869

  3. Mimicking Neurotransmitter Release in Chemical Synapses via Hysteresis Engineering in MoS2 Transistors.

    PubMed

    Arnold, Andrew J; Razavieh, Ali; Nasr, Joseph R; Schulman, Daniel S; Eichfeld, Chad M; Das, Saptarshi

    2017-03-28

    Neurotransmitter release in chemical synapses is fundamental to diverse brain functions such as motor action, learning, cognition, emotion, perception, and consciousness. Moreover, improper functioning or abnormal release of neurotransmitter is associated with numerous neurological disorders such as epilepsy, sclerosis, schizophrenia, Alzheimer's disease, and Parkinson's disease. We have utilized hysteresis engineering in a back-gated MoS2 field effect transistor (FET) in order to mimic such neurotransmitter release dynamics in chemical synapses. All three essential features, i.e., quantal, stochastic, and excitatory or inhibitory nature of neurotransmitter release, were accurately captured in our experimental demonstration. We also mimicked an important phenomenon called long-term potentiation (LTP), which forms the basis of human memory. Finally, we demonstrated how to engineer the LTP time by operating the MoS2 FET in different regimes. Our findings could provide a critical component toward the design of next-generation smart and intelligent human-like machines and human-machine interfaces.

  4. A Comparison of the Unpressurized Rover and Small Pressurized Rover During a Desert Field Evaluation

    NASA Technical Reports Server (NTRS)

    Litaker, Harry; Thompson, Shelby; Howard, Robert

    2009-01-01

    To effectively explore the lunar surface, astronauts will need a transportation vehicle which can traverse all types of terrain. Currently, the National Aeronautics and Space Administration (NASA) is investigating two lunar rover configurations to meet such a requirement. Under the Lunar Electric Rover (LER) project, a comparison study between the unpressurized rover (UPR) and the small pressurized rover (SPR) was conducted at the Black Point Lava Flow in Arizona. The objective of the study was to obtain human-in-the-loop performance data on the vehicles with respect to human-machine interfaces, vehicle impacts on crew productivity, and scientific observations. Four male participants took part in four one-day field tests using exactly the same terrain and scientific sites for an accurate comparison between vehicle configurations. Subjective data were collected using several human factors performance measures. Results indicate that either vehicle configuration was generally acceptable for a lunar mission; however, the SPR configuration was preferred over the UPR configuration, primarily because of the SPR's ability to cause less fatigue and enable greater crew productivity.

  5. A motion sensing-based framework for robotic manipulation.

    PubMed

    Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing

    2016-01-01

    To date, outside of controlled environments, robots normally perform manipulation tasks in cooperation with human operators. This pattern requires operators to have extensive technical training for the varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture-based interface, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the corresponding robot actions. For compatibility, a general hardware interface layer was also developed within the framework. Simulation and physical experiments were conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.
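    A minimal sketch of the gesture-command-to-robot-action mapping such a framework performs is shown below; the gesture labels, actions, and the DummyRobot class are illustrative assumptions, not part of the published framework.

```python
# Hypothetical gesture labels and robot actions; names are illustrative only.
GESTURE_TO_ACTION = {
    "swipe_left":  ("move", {"dx": -0.05, "dy": 0.0}),
    "swipe_right": ("move", {"dx": +0.05, "dy": 0.0}),
    "fist":        ("grip", {"close": True}),
    "open_palm":   ("grip", {"close": False}),
}

def dispatch(gesture, robot):
    """Map a recognized gesture to a robot command, ignoring unknown input."""
    action = GESTURE_TO_ACTION.get(gesture)
    if action is None:
        return
    name, params = action
    getattr(robot, name)(**params)

class DummyRobot:
    def move(self, dx, dy): print(f"move by ({dx}, {dy})")
    def grip(self, close):  print("close gripper" if close else "open gripper")

dispatch("swipe_left", DummyRobot())
```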

  6. A flexible skin piloerection monitoring sensor

    NASA Astrophysics Data System (ADS)

    Kim, Jaemin; Seo, Dae Geon; Cho, Young-Ho

    2014-06-01

    We have designed, fabricated, and tested a capacitive-type flexible micro sensor for measurement of human skin piloerection arising from sudden emotional and environmental change. Present skin piloerection monitoring methods are limited in their objectivity and quantitative accuracy because the bulky size and heavy weight of the measuring devices physically disturb the skin. The proposed flexible skin piloerection monitoring sensor is composed of a 3 × 3 spiral coplanar capacitor array made with conductive polymer, giving it high capacitive density and a thickness thin enough to be attached to human skin. The performance of the skin piloerection monitoring sensor is characterized using an artificial bump representing a human skin goosebump, resulting in a sensitivity of -0.00252%/μm and a nonlinearity of 25.9% for artificial goosebump deformation in the range of 0-326 μm. We also verified successive human skin piloerection of 3.5 s duration on the subject's dorsal forearms, resulting in capacitance changes of -6.2 fF and -9.2 fF for piloerection intensities of 145 μm and 194 μm, respectively. It is demonstrated experimentally that the proposed sensor is capable of measuring human skin piloerection objectively and quantitatively, thereby suggesting a quantitative evaluation method for qualitative human emotional status in cognitive human-machine interface applications.
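    A rough two-point calibration sketch based on the figures reported above (-6.2 fF at 145 μm and -9.2 fF at 194 μm) is given below; it assumes local linearity, which the stated 25.9% nonlinearity makes only an approximation, and is purely illustrative.

```python
# Two calibration points from the abstract: (intensity in um, delta C in fF).
points = [(145.0, -6.2), (194.0, -9.2)]

(x0, y0), (x1, y1) = points
slope = (y1 - y0) / (x1 - x0)          # fF per um of piloerection
intercept = y0 - slope * x0

def estimate_intensity(delta_c_fF):
    """Invert the two-point line to estimate piloerection intensity (um)."""
    return (delta_c_fF - intercept) / slope

print(f"slope = {slope:.4f} fF/um")
print(f"-8.0 fF -> about {estimate_intensity(-8.0):.0f} um of piloerection")
```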

  7. Hybrid EEG-EOG brain-computer interface system for practical machine control.

    PubMed

    Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid

    2010-01-01

    Practical issues such as accuracy across subjects, number of sensors, and training time are important problems for existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no action, and complete stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.
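    A minimal decision-rule sketch of such a hybrid scheme is shown below. It is not the authors' classifier; the thresholds, band limits, and sampling rate are assumptions chosen only to illustrate how large EOG deflections could select left/right while EEG band power selects forward, no action, or stop.

```python
import numpy as np

def classify_hybrid(eog, eeg, fs=256,
                    eog_thresh=100e-6, alpha_low=0.2, alpha_high=0.6):
    """Toy hybrid rule: large EOG deflections steer left/right; relative
    EEG alpha-band power selects forward / no action / stop.
    All thresholds are illustrative, not the published ones."""
    peak = eog[np.argmax(np.abs(eog))]
    if abs(peak) > eog_thresh:
        return "left" if peak > 0 else "right"

    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    alpha = spectrum[(freqs >= 8) & (freqs <= 13)].sum()
    total = spectrum[(freqs >= 1) & (freqs <= 40)].sum() + 1e-12
    ratio = alpha / total
    if ratio > alpha_high:
        return "stop"
    if ratio < alpha_low:
        return "forward"
    return "no_action"

rng = np.random.default_rng(1)
print(classify_hybrid(rng.normal(0, 20e-6, 256), rng.normal(0, 10e-6, 256)))
```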

  8. Space applications of Automation, Robotics and Machine Intelligence Systems (ARAMIS). Volume 4: Application of ARAMIS capabilities to space project functional elements

    NASA Technical Reports Server (NTRS)

    Miller, R. H.; Minsky, M. L.; Smith, D. B. S.

    1982-01-01

    Applications of automation, robotics, and machine intelligence systems (ARAMIS) to space activities and their related ground support functions are studied, so that informed decisions can be made on which aspects of ARAMIS to develop. The specific tasks which will be required by future space project tasks are identified and the relative merits of these options are evaluated. The ARAMIS options defined and researched span the range from fully human to fully machine, including a number of intermediate options (e.g., humans assisted by computers, and various levels of teleoperation). By including this spectrum, the study searches for the optimum mix of humans and machines for space project tasks.

  9. Application of the user-centred design process according ISO 9241-210 in air traffic control.

    PubMed

    König, Christina; Hofmann, Thomas; Bruder, Ralph

    2012-01-01

    Designing a usable human machine interface for air traffic control is challenging and should follow approved methods. The ISO 9241-210 standard promises high usability of products by integrating future users and following an iterative process. This contribution describes the procedure and first results of the analysis and application of ISO 9241-210 to develop a planning tool for air traffic controllers.

  10. Stretchable human-machine interface based on skin-conformal sEMG electrodes with self-similar geometry

    NASA Astrophysics Data System (ADS)

    Dong, Wentao; Zhu, Chen; Hu, Wei; Xiao, Lin; Huang, Yong'an

    2018-01-01

    Stretchable surface electrodes have attracted increasing attention owing to their potential applications in biological signal monitoring, wearable human-machine interfaces (HMIs) and the Internet of Things. This paper proposes a stretchable HMI based on a surface electromyography (sEMG) electrode with a self-similar serpentine configuration. The sEMG electrode was transfer-printed conformally onto the skin surface to monitor biological signals, followed by signal classification and control of a mobile robot. Such electrodes can bear rather large deformation (>30%) under an appropriate areal coverage. The sEMG electrodes have been used to record electrophysiological signals from parts of the body with sharp curvature, such as the index finger, back of the neck and face, and they exhibit great potential for HMIs in robotics and healthcare. Electrodes placed on the two wrists generate two different signals depending on whether each fist is clenched or loosened; combining the gestures from the two wrists yields four distinguishable signal classes, that is, four control modes. Experiments demonstrated that the electrodes were successfully used as an HMI to control the motion of a mobile robot remotely. Project supported by the National Natural Science Foundation of China (Nos. 51635007, 91323303).
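    The two-wrist, four-mode encoding described above can be sketched in a few lines of Python; the RMS threshold and the specific command assigned to each wrist combination are assumptions for illustration, not those used in the paper.

```python
# Each wrist contributes one binary state (fist clenched or loosened), so the
# pair encodes four control modes. The command names are assumptions.
COMMANDS = {
    (0, 0): "stop",
    (1, 0): "turn_left",
    (0, 1): "turn_right",
    (1, 1): "go_forward",
}

def wrist_state(emg_rms, threshold=0.05):
    """1 if the RMS sEMG amplitude suggests a clenched fist, else 0."""
    return int(emg_rms > threshold)

def decode(left_rms, right_rms):
    return COMMANDS[(wrist_state(left_rms), wrist_state(right_rms))]

print(decode(0.12, 0.01))   # -> "turn_left"
```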

  11. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine.

    PubMed

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-02-06

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic motion synthesis algorithm and a biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.
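    The Gaussian process regression step can be illustrated with a short scikit-learn sketch that maps squat condition variables to a few joint angles; the input variables, training data, and angle outputs below are synthetic placeholders, not the study's data or its exact model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical training data: squat condition variables (e.g. barbell load,
# stance width, subject height, bar height) -> a few key joint angles (deg).
X = rng.uniform([20, 0.3, 1.5, 0.9], [100, 0.6, 1.95, 1.2], size=(30, 4))
Y = np.column_stack([
    60 + 0.2 * X[:, 0] + rng.normal(0, 2, 30),   # knee flexion
    40 + 0.1 * X[:, 0] + rng.normal(0, 2, 30),   # hip flexion
    15 + 5.0 * X[:, 1] + rng.normal(0, 1, 30),   # ankle dorsiflexion
])

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                               normalize_y=True)
gpr.fit(X, Y)

# Synthesize angles for a new (hypothetical) squat condition.
pred = gpr.predict(np.array([[70, 0.45, 1.75, 1.05]]))
print("predicted joint angles (deg):", pred.round(1))
```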

  12. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  13. From pilot's associate to satellite controller's associate

    NASA Technical Reports Server (NTRS)

    Neyland, David L.; Lizza, Carl; Merkel, Philip A.

    1992-01-01

    Associate technology is an emerging engineering discipline wherein intelligent automation can significantly augment the performance of man-machine systems. An associate system is one that monitors operator activity and adapts its operational behavior accordingly. Associate technology is most effectively applied when mapped into management of the human-machine interface and display-control loop in typical manned systems. This paper addresses the potential for applying associate technology to the arena of intelligent command and control of satellite systems, from diagnosis of onboard and on-ground satellite system fault conditions to execution of nominal satellite control functions. Rather than specifying a specific solution, this paper draws parallels between the Pilot's Associate concept and the domain of satellite control.

  14. Kinematic design to improve ergonomics in human machine interaction.

    PubMed

    Schiele, André; van der Helm, Frans C T

    2006-12-01

    This paper introduces a novel kinematic design paradigm for ergonomic human machine interaction. Goals for optimal design are formulated generically and applied to the mechanical design of an upper-arm exoskeleton. A nine degree-of-freedom (DOF) model of the human arm kinematics is presented and used to develop, test, and optimize the kinematic structure of a human-arm-interfacing exoskeleton. The resulting device can interact with an unprecedented portion of the natural limb workspace, including motions in the shoulder-girdle, shoulder, elbow, and the wrist. The exoskeleton does not require alignment to the human joint axes, yet is able to actuate each DOF of our redundant limb unambiguously and without reaching into singularities. The device is comfortable to wear and does not create residual forces if misalignments exist. Implemented in a rehabilitation robot, the design features of the exoskeleton could enable longer lasting training sessions, training of fully natural tasks such as activities of daily living, and shorter dress-on and dress-off times. Results from inter-subject experiments with a prototype are presented that verify usability over the entire workspace of the human arm, including the shoulder and shoulder girdle.

  15. Intellicount: High-Throughput Quantification of Fluorescent Synaptic Protein Puncta by Machine Learning

    PubMed Central

    Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.

    2017-01-01

    Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
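    For contrast with the ML-based approach, a minimal simple-thresholding baseline of the kind Intellicount improves on can be written with scikit-image; the synthetic image, Otsu threshold, and minimum-area filter below are illustrative choices, not Intellicount's algorithm.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_puncta_by_threshold(image, min_area=4):
    """Simple-thresholding baseline for puncta counting: global Otsu
    threshold, connected-component labeling, small-region rejection."""
    thresh = threshold_otsu(image)
    mask = image > thresh
    labels = label(mask)
    regions = [r for r in regionprops(labels) if r.area >= min_area]
    return len(regions), [r.centroid for r in regions]

# Synthetic test image with two bright "puncta" on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(10, 2, (64, 64))
img[20:24, 20:24] += 50
img[40:45, 50:55] += 60
print(count_puncta_by_threshold(img)[0])   # -> 2
```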

  16. Development of the FITS tools package for multiple software environments

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; Blackburn, J. K.

    1992-01-01

    The HEASARC is developing a package of general purpose software for analyzing data files in FITS format. This paper describes the design philosophy which makes the software both machine-independent (it runs on VAXs, Suns, and DEC-stations) and software environment-independent. Currently the software can be compiled and linked to produce IRAF tasks, or alternatively, the same source code can be used to generate stand-alone tasks using one of two implementations of a user-parameter interface library. The machine independence of the software is achieved by writing the source code in ANSI standard Fortran or C, using the machine-independent FITSIO subroutine interface for all data file I/O, and using a standard user-parameter subroutine interface for all user I/O. The latter interface is based on the Fortran IRAF Parameter File interface developed at STScI. The IRAF tasks are built by linking to the IRAF implementation of this parameter interface library. Two other implementations of this parameter interface library, which have no IRAF dependencies, are now available which can be used to generate stand-alone executable tasks. These stand-alone tasks can simply be executed from the machine operating system prompt either by supplying all the task parameters on the command line or by entering the task name after which the user will be prompted for any required parameters. A first release of this FTOOLS package is now publicly available. The currently available tasks are described, along with instructions on how to obtain a copy of the software.
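    As a rough illustration of the stand-alone task pattern described above (parameters supplied on the command line, with prompting for any that are missing) and of FITS file access, the following Python sketch uses argparse and astropy.io.fits as modern stand-ins for the package's Fortran/C parameter-interface and FITSIO libraries; it is not part of the FTOOLS package itself.

```python
import argparse
from astropy.io import fits

def get_parameters():
    """Take parameters from the command line, prompting for missing ones."""
    parser = argparse.ArgumentParser(description="toy FTOOLS-style task")
    parser.add_argument("infile", nargs="?", help="input FITS file")
    parser.add_argument("--extension", type=int, default=0)
    args = parser.parse_args()
    if args.infile is None:
        args.infile = input("Name of input FITS file: ").strip()
    return args

def main():
    args = get_parameters()
    with fits.open(args.infile) as hdul:
        header = hdul[args.extension].header
        print(f"{args.infile}[{args.extension}]: "
              f"{header.get('NAXIS', 0)} axes, object={header.get('OBJECT')}")

if __name__ == "__main__":
    main()
```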

  17. Skill acquisition while operating in-vehicle information systems: interface design determines the level of safety-relevant distractions.

    PubMed

    Jahn, Georg; Krems, Josef F; Gelau, Christhard

    2009-04-01

    This study tested whether the ease of learning to use human-machine interfaces of in-vehicle information systems (IVIS) can be assessed at standstill. Assessing the attentional demand of IVIS should include an evaluation of ease of learning, because the use of IVIS at low skill levels may create safety-relevant distractions. Skill acquisition in operating IVIS was quantified by fitting the power law of practice to training data sets collected in a driving study and at standstill. Participants practiced manual destination entry with two route guidance systems differing in cognitive demand. In Experiment 1, a sample of middle-aged participants was trained while steering routes of varying driving demands. In Experiment 2, another sample of middle-aged participants was trained at standstill. In Experiment 1, display glance times were less affected by driving demands than total task times were, and decreased at slightly higher speed-up rates (0.02 higher on average) than task times collected at standstill in Experiment 2. The system interface that minimized cognitive demand was operated more quickly and was easier to learn. Its system delays increased static task times, which still predicted 58% of the variance in display glance times, compared with 76% for the second system. The ease of learning to use an IVIS interface and the decrease in attentional demand with training can be assessed at standstill. Fitting the power law of practice to static task times yields parameters that predict display glance times while driving, which makes it possible to compare interfaces with regard to ease of learning.
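    Fitting the power law of practice amounts to estimating a and b in T(n) = a * n^(-b) from task times over successive trials; a minimal SciPy sketch with made-up static task times (not the study's data) is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(trial, a, b):
    """Power law of practice: task time T(n) = a * n**(-b)."""
    return a * trial ** (-b)

# Hypothetical static (standstill) destination-entry times in seconds.
trials = np.arange(1, 11)
task_times = np.array([42.0, 33.5, 29.8, 27.0, 25.6,
                       24.1, 23.3, 22.5, 21.9, 21.4])

(a, b), _ = curve_fit(power_law, trials, task_times, p0=(40.0, 0.3))
print(f"initial time a = {a:.1f} s, speed-up rate b = {b:.3f}")
```

    The fitted speed-up rate b can then be compared across interfaces, or used as a predictor of display glance times while driving, in the spirit of the study above.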

  18. An assisted navigation training framework based on judgment theory using sparse and discrete human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano

    2009-01-01

    This paper presents a new framework for training people with severe motor disabilities to steer an assisted mobile robot (AMR), such as a powered wheelchair. Users with a high level of motor disability are not able to use standard HMIs that provide a continuous command signal (e.g., a standard joystick). For this reason, HMIs providing a small set of simple commands, which are sparse and discrete in time, must be used (e.g., a scanning interface or a brain-computer interface), making it very difficult to steer the AMR. The assisted navigation training framework (ANTF) is therefore designed to train users to drive the AMR in indoor structured environments using this type of HMI. Additionally, it characterizes how users steer the robot, and this characterization will later be used to adapt the AMR navigation system to the user's competence in steering the AMR. A rule-based lens (RBL) model is used to characterize users' driving of the AMR. Individual judgment performance in choosing the best manoeuvres is modeled using a genetic-based policy capturing (GBPC) technique suited to inferring non-compensatory judgment strategies from human decision data. Three user models, at three different learning stages, using the RBL paradigm, are presented.

  19. Formal analysis and automatic generation of user interfaces: approach, methodology, and an algorithm.

    PubMed

    Heymann, Michael; Degani, Asaf

    2007-04-01

    We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.

  20. Application of earth resources technology satellite data to urban and regional planning: Test site, County of Los Angeles

    NASA Technical Reports Server (NTRS)

    Raje, S.; Mcknight, J.; Willoughby, G.; Economy, R. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The County of Los Angeles photointerpreted ERTS film products to define problems of interest, coordinated ground truth over the complex test site (including interfaces with secondary users), and participated in on-line analyses with the GE multispectral information extraction systems. Interactive machine analyses were carried out, developing techniques and procedures as well as evaluating the outputs for community and regional planning. Extensive aircraft underflight coverage was provided that was valuable both in preparing inputs for and evaluating outputs of the machine-aided analyses. One of the nonstandard ERTS images led to the discovery of a major new fault lineament on the northern slope of the Santa Monica Mountains.

  1. Flying Unmanned Aircraft: A Pilot's Perspective

    NASA Technical Reports Server (NTRS)

    Pestana, Mark E.

    2011-01-01

    The National Aeronautics and Space Administration (NASA) is pioneering various Unmanned Aircraft System (UAS) technologies and procedures which may enable routine access to the National Airspace System (NAS), with an aim for the Next Gen NAS. These tools will aid in the development of technologies and integrated capabilities that will enable high-value missions for science, security, and defense, and open the door to low-cost, extreme-duration, stratospheric flight. A century of aviation evolution has resulted in accepted standards and best practices in the design of human-machine interfaces, the displays and controls of which serve to optimize safe and efficient flight operations and situational awareness. The current proliferation of non-standard, aircraft-specific flight crew interfaces in UAS, coupled with the inherent limitations of operating UAS without in-situ sensory input and feedback (aural, visual, and vestibular cues), has increased the risk of mishaps associated with the design of the "cockpit." Examples of current non- or sub-standard design features range from "annoying" and "inefficient" to those that are difficult to manipulate or interpret in a timely manner, as well as those that are "burdensome" and "unsafe." A concerted effort is required to establish best practices and standards for the human-machine interfaces, for the pilot as well as the air traffic controller. In addition, the roles, responsibilities, knowledge, and skill sets associated with operating UAS may require redefining the terms "pilot" and "air traffic controller," especially in the Next-Gen NAS. The knowledge, skill sets, training, and qualification standards for UAS operations must be established and must reflect the aircraft-specific human-machine interfaces and control methods. NASA's recent experience flying its MQ-9 Ikhana in the NAS for extended durations has enabled both NASA and the FAA to realize the full potential of UAS, as well as to understand the implications of current limitations. Ikhana is a Predator-B/Reaper UAS, built by General Atomics Aeronautical Systems, Inc., and modified for research. Since 2007, the aircraft has been flown seasonally with a wing-mounted pod containing an infrared scanner, utilized to provide real-time wildfire geo-location data to various fire-fighting agencies in the western U.S. The multi-agency effort included an extensive process to obtain flight clearance from the FAA to operate under special provisions, given that UAS in general do not fully comply with current airspace regulations (e.g., sense-and-avoid requirements).

  2. Development of a systems theoretical procedure for evaluation of the work organization of the cockpit crew of a civil transport airplane

    NASA Technical Reports Server (NTRS)

    Fricke, M.; Vees, C.

    1983-01-01

    To achieve optimum design for the man machine interface with aircraft, a description of the interaction and work organization of the cockpit crew is needed. The development of system procedure to evaluate the work organization of pilots while structuring the work process is examined. Statistical data are needed to simulate sequences of pilot actions on the computer. Investigations of computer simulation and applicability for evaluation of crew concepts are discussed.

  3. A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    PubMed Central

    Corti, Kevin; Gillespie, Alex

    2015-01-01

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence. PMID:26042066

  4. Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.

    PubMed

    McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A

    2017-06-28

    To review the state of the art of robotic-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robotic-aided hand physiotherapy devices and brain-machine interface systems is outlined, focussing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the two major core designs (end-effector and exoskeleton), are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice, and may help system designers to develop new devices.

  5. Evaluation of display technologies for Internet of Things (IoT)

    NASA Astrophysics Data System (ADS)

    Sabo, Julia; Fegert, Tobias; Cisowski, Matthäus Stephanus; Marsal, Anatolij; Eichberger, Domenik; Blankenbach, Karlheinz

    2017-02-01

    The Internet of Things (IoT) is a booming industry. We investigated several (semi-)professional IoT devices in combination with displays (with a focus on reflective technologies) and LEDs. First, these displays were compared for reflectance and ambient light performance. Two measurement set-ups with diffuse conditions were used to simulate typical indoor lighting conditions for IoT displays. E-paper displays were evaluated best, as they combine relatively high reflectance with a large contrast ratio. Reflective monochrome LCDs show lower reflectance but are widely available. Second, we studied IoT microprocessor interfaces to displays. A µP can drive single LEDs and one or two Seg-8 LED digits directly via GPIOs. Other display technologies require display controllers with a parallel or serial interface to the microprocessor, as they need dedicated waveforms for driving the pixels. Most suitable are display modules with built-in display RAM, as only the pixel data that change have to be transferred. An HDMI output (e.g., Raspberry Pi) results in high display cost; therefore, AMLCDs are not suitable for low- to medium-cost IoT systems. We furthermore compared and evaluated status indicator, icon, text, and graphics IoT display systems regarding human machine interface (HMI) characteristics, effectiveness, and power consumption. We found that low-resolution graphics bistable e-paper displays are the most appropriate display technology for IoT systems, as they continue to show information after a power failure or a power switch-off during maintenance and can display, e.g., QR codes for installation. LED indicators are the most cost-effective approach, which however has very limited HMI capabilities.
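    As a minimal sketch of the simplest case mentioned above (a µP driving a single status LED directly from a GPIO pin), the following uses the RPi.GPIO library on a Raspberry Pi; the pin number and blink pattern are assumptions, and display modules would instead be driven through a display controller over a parallel or serial interface as described.

```python
import time
import RPi.GPIO as GPIO

LED_PIN = 18                      # hypothetical wiring of the status LED

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
try:
    for _ in range(5):            # blink the IoT status indicator
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```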

  6. Channelized relevance vector machine as a numerical observer for cardiac perfusion defect detection task

    NASA Astrophysics Data System (ADS)

    Kalayeh, Mahdi M.; Marin, Thibault; Pretorius, P. Hendrik; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.

    2011-03-01

    In this paper, we present a numerical observer for image quality assessment, aiming to predict human observer accuracy in a cardiac perfusion defect detection task for single-photon emission computed tomography (SPECT). In medical imaging, image quality should be assessed by evaluating the human observer accuracy for a specific diagnostic task. This approach is known as task-based assessment. Such evaluations are important for optimizing and testing imaging devices and algorithms. Unfortunately, human observer studies with expert readers are costly and time-demanding. To address this problem, numerical observers have been developed as a surrogate for human readers to predict human diagnostic performance. The channelized Hotelling observer (CHO) with internal noise model has been found to predict human performance well in some situations, but does not always generalize well to unseen data. We have argued in the past that finding a model to predict human observers could be viewed as a machine learning problem. Following this approach, in this paper we propose a channelized relevance vector machine (CRVM) to predict human diagnostic scores in a detection task. We have previously used channelized support vector machines (CSVM) to predict human scores and have shown that this approach offers better and more robust predictions than the classical CHO method. The comparison of the proposed CRVM with our previously introduced CSVM method suggests that CRVM can achieve similar generalization accuracy, while dramatically reducing model complexity and computation time.
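    A toy channelized-observer pipeline in the spirit described above can be sketched with NumPy and scikit-learn: images are projected onto a few channel templates and a classifier is trained on the channel outputs. The Gaussian channels, the synthetic images, and the use of a support vector machine (the CSVM-style variant; scikit-learn has no built-in relevance vector machine) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def make_channels(size=32, widths=(1, 2, 4, 8)):
    """Crude radially symmetric Gaussian channels (stand-ins for the
    frequency-selective channels used in channelized observers)."""
    y, x = np.mgrid[:size, :size] - size / 2
    r2 = x**2 + y**2
    return np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths])

def channelize(images, channels):
    """Project each image onto the channel templates: N x n_channels."""
    return images.reshape(len(images), -1) @ channels.T

rng = np.random.default_rng(0)
channels = make_channels()

# Synthetic "defect absent" vs "defect present" images (Gaussian blob signal).
background = rng.normal(0, 1, (200, 32, 32))
signal = np.exp(-np.sum((np.mgrid[:32, :32] - 16) ** 2, axis=0) / (2 * 3**2))
images = background.copy()
images[100:] += 0.8 * signal
labels = np.r_[np.zeros(100), np.ones(100)]

# A CSVM-style numerical observer: a classifier on the channel outputs
# (an RVM would be swapped in here for the CRVM variant proposed above).
scores = channelize(images, channels)
observer = SVC(kernel="linear").fit(scores, labels)
print("training accuracy:", observer.score(scores, labels))
```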

  7. KARL: A Knowledge-Assisted Retrieval Language. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1985-01-01

    Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, relationships between entities, and basic English language syntax and semantics to translate the user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that will assist in improving the human-machine interface will be needed and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development into the emerging field of natural language database query systems.

  8. An evaluation of software tools for the design and development of cockpit displays

    NASA Technical Reports Server (NTRS)

    Ellis, Thomas D., Jr.

    1993-01-01

    The use of all-glass cockpits at the NASA Langley Research Center (LaRC) simulation facility has changed the means of design, development, and maintenance of instrument displays. The human-machine interface has evolved from a physical hardware device to a software-generated electronic display system. This has subsequently caused an increased workload at the facility. As computer processing power increases and the glass cockpit becomes predominant in facilities, software tools used in the design and development of cockpit displays are becoming both feasible and necessary for a more productive simulation environment. This paper defines LaRC requirements of a display software development tool and compares two available applications against these requirements. As a part of the software engineering process, these tools reduce development time, provide a common platform for display development, and produce exceptional real-time results.

  9. Data storage technology: Hardware and software, Appendix B

    NASA Technical Reports Server (NTRS)

    Sable, J. D.

    1972-01-01

    This project involves the development of more economical ways of integrating and interfacing new storage devices and data processing programs into a computer system. It involves developing interface standards and a software/hardware architecture which will make it possible to develop machine-independent devices and programs. These will interface with the machine-dependent operating systems of particular computers. The development project will not develop the software which would ordinarily be the responsibility of the manufacturer to supply, but will develop the standards with which that software is expected to conform in providing an interface with the user or storage system.

  10. Developing Lathing Parameters for PBX 9501

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodrum, Randall Brock

    This thesis presents the work performed on lathing PBX 9501 to gather and analyze cutting force and temperature data during the machining process. This data will be used to decrease federal-regulation-constrained machining time of the high explosive PBX 9501. The effects of the machining parameters depth of cut, surface feet per minute, and inches per revolution on cutting force and cutting interface temperature were evaluated. Cutting tools of tip radius 0.005 inches and 0.05 inches were tested to determine what effect the tool shape had on the machining process as well. A consistently repeatable relationship of temperature to changing depth of cut and surface feet per minute is found, while only a weak dependence on changing inches per revolution is found. Results also show the relation of cutting force to depth of cut and inches per revolution, while a weak dependence on SFM is found. Conclusions suggest rapid, shallow cuts optimize machining time for a billet of PBX 9501 while minimizing temperature increase and cutting force.

  11. Towards a genetics-based adaptive agent to support flight testing

    NASA Astrophysics Data System (ADS)

    Cribbs, Henry Brown, III

    Although the benefits of aircraft simulation have been known since the late 1960s, simulation almost always entails interaction with a human test pilot. This "pilot-in-the-loop" simulation process provides useful evaluative information to the aircraft designer and provides a training tool to the pilot. Emulation of a pilot during the early phases of the aircraft design process might provide designers a useful evaluative tool. Machine learning might emulate a pilot in a simulated aircraft/cockpit setting. Preliminary work in the application of machine learning techniques, such as reinforcement learning, to aircraft maneuvering has shown promise. These studies used simplified interfaces between the machine learning agent and the aircraft simulation. The simulations employed low-order equivalent system models. High-fidelity aircraft simulations exist, such as the simulations developed by NASA at its Dryden Flight Research Center. To expand the application domain of reinforcement learning to aircraft designs, this study presents a series of experiments that examine a reinforcement learning agent in the role of test pilot. The NASA X-31 and F-106 high-fidelity simulations provide realistic aircraft for the agent to maneuver. The approach of the study is to examine an agent possessing a genetic-based artificial neural network to approximate long-term expected cost (Bellman value) in a basic maneuvering task. The experiments evaluate different learning methods based on a common feedback function and an identical task. The learning methods evaluated are: Q-learning, Q(lambda)-learning, SARSA learning, and SARSA(lambda) learning. Experimental results indicate that, while prediction error remains quite high, similar, repeatable behaviors occur in both aircraft. The similar behavior demonstrates the portability of the agent between aircraft with different handling qualities (dynamics). Besides the adaptive behavior aspects of the study, the genetic algorithm used in the agent is shown to play an additive role in the shaping of the artificial neural network to the prediction task.
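    The study's agent approximates the value function with a genetic-based neural network, but the Q-learning and SARSA update rules it evaluates can be illustrated with a minimal tabular sketch; the state/action sizes, learning parameters, and the toy transition below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def epsilon_greedy(state):
    """Explore with probability epsilon, otherwise act greedily on Q."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_learning_update(s, a, r, s_next):
    """Off-policy temporal-difference update (Q-learning)."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy update (SARSA) uses the action actually taken next."""
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

# One illustrative transition with made-up dynamics and reward.
s = 0
a = epsilon_greedy(s)
s_next, r = (s + 1) % n_states, -1.0
q_learning_update(s, a, r, s_next)
print(Q[s])
```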

  12. Three-dimensional virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    The development of an alternative medium for displaying information in complex human-machine interfaces is described. The 3-D virtual acoustic display is a means for accurately transferring information to a human operator using the auditory modality; it combines directional and semantic characteristics to form naturalistic representations of dynamic objects and events in remotely sensed or simulated environments. Although the technology can stand alone, it is envisioned as a component of a larger multisensory environment and will no doubt find its greatest utility in that context. The general philosophy in the design of the display has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and later by technological capabilities or constraints. In expanding on this view, current and potential uses of virtual acoustic displays are addressed, such displays are characterized, recent approaches to their implementation and application are reviewed, the research project at NASA-Ames is described in detail, and finally some critical research issues for the future are outlined.

  13. Space Applications of Automation, Robotics and Machine Intelligence Systems (ARAMIS). Volume 1: Executive Summary

    NASA Technical Reports Server (NTRS)

    Miller, R. H.; Minsky, M. L.; Smith, D. B. S.

    1982-01-01

    Potential applications of automation, robotics, and machine intelligence systems (ARAMIS) to space activities, and to their related ground support functions are explored. The specific tasks which will be required by future space projects are identified. ARAMIS options which are candidates for those space project tasks and the relative merits of these options are defined and evaluated. Promising applications of ARAMIS and specific areas for further research are identified. The ARAMIS options defined and researched by the study group span the range from fully human to fully machine, including a number of intermediate options (e.g., humans assisted by computers, and various levels of teleoperation). By including this spectrum, the study searches for the optimum mix of humans and machines for space project tasks.

  14. Neural Coding for Effective Rehabilitation

    PubMed Central

    2014-01-01

    Successful neurological rehabilitation depends on accurate diagnosis, effective treatment, and quantitative evaluation. Neural coding, a technology for interpretation of functional and structural information of the nervous system, has contributed to advancements in neuroimaging, brain-machine interfaces (BMI), and the design of training devices for rehabilitation purposes. In this review, we summarized the latest breakthroughs in neuroimaging from microscale to macroscale levels with potential diagnostic applications for rehabilitation. We also reviewed the achievements in electrocorticography (ECoG) coding with both animal models and human beings for BMI design, electromyography (EMG) interpretation for interaction with external robotic systems, and robot-assisted quantitative evaluation of the progress of rehabilitation programs. Future rehabilitation will be more home-based, automated, and self-administered by patients. Further investigations and breakthroughs are mainly needed in improving the computational efficiency of neuroimaging and multichannel ECoG by selection of localized neuroinformatics, validating the effectiveness of BMI-guided rehabilitation programs, and simplifying the system operation of training devices. PMID:25258708

  15. Objective evaluation of situation awareness for dynamic decision makers in teleoperations

    NASA Technical Reports Server (NTRS)

    Endsley, Mica R.

    1991-01-01

    Situation awareness, a current mental model of the environment, is critical to the ability of operators to perform complex and dynamic tasks. This should be particularly true for teleoperators, who are separated from the situation they need to be aware of. The design of the man-machine interface must be guided by the goal of maintaining and enhancing situation awareness. The objective of this work has been to build a foundation upon which research in the area can proceed. A model of dynamic human decision making which is inclusive of situation awareness will be presented, along with a definition of situation awareness. A method for measuring situation awareness will also be presented as a tool for evaluating design concepts. The Situation Awareness Global Assessment Technique (SAGAT) is an objective measure of situation awareness originally developed for the fighter cockpit environment. The results of SAGAT validation efforts will be presented. Implications of this research for teleoperators and other operators of dynamic systems will be discussed.

  16. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  17. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  18. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  19. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  20. 21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...

  1. 'Fly Like This': Natural Language Interface for UAV Mission Planning

    NASA Technical Reports Server (NTRS)

    Chandarana, Meghan; Meszaros, Erica L.; Trujillo, Anna; Allen, B. Danette

    2017-01-01

    With the increasing presence of unmanned aerial vehicles (UAVs) in everyday environments, the user base of these powerful and potentially intelligent machines is expanding beyond exclusively highly trained vehicle operators to include non-expert system users. Scientists seeking to augment costly and often inflexible methods of data collection historically used are turning towards lower cost and reconfigurable UAVs. These new users require more intuitive and natural methods for UAV mission planning. This paper explores two natural language interfaces - gesture and speech - for UAV flight path generation through individual user studies. Subjects who participated in the user studies also used a mouse-based interface for a baseline comparison. Each interface allowed the user to build flight paths from a library of twelve individual trajectory segments. Individual user studies evaluated performance, efficacy, and ease-of-use of each interface using background surveys, subjective questionnaires, and observations on time and correctness. Analysis indicates that natural language interfaces are promising alternatives to traditional interfaces. The user study data collected on the efficacy and potential of each interface will be used to inform future intuitive UAV interface design for non-expert users.

  2. Proceedings of the first workshop on Peripheral Machine Interfaces: going beyond traditional surface electromyography

    PubMed Central

    Castellini, Claudio; Artemiadis, Panagiotis; Wininger, Michael; Ajoudani, Arash; Alimusaj, Merkur; Bicchi, Antonio; Caputo, Barbara; Craelius, William; Dosen, Strahinja; Englehart, Kevin; Farina, Dario; Gijsberts, Arjan; Godfrey, Sasha B.; Hargrove, Levi; Ison, Mark; Kuiken, Todd; Marković, Marko; Pilarski, Patrick M.; Rupp, Rüdiger; Scheme, Erik

    2014-01-01

    One of the hottest topics in rehabilitation robotics is that of proper control of prosthetic devices. Despite decades of research, the state of the art is dramatically behind expectations. To shed light on this issue, in June 2013 the first international workshop on the Present and Future of Non-invasive Peripheral Nervous System (PNS)–Machine Interfaces (MI; PMI) was convened, hosted by the International Conference on Rehabilitation Robotics. The keyword PMI was selected to denote human–machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, dealing with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of opinions expressed by each researcher/group involved. PMID:25177292

  3. A restrained-torque-based motion instructor: forearm flexion/extension-driving exoskeleton

    NASA Astrophysics Data System (ADS)

    Nishimura, Takuya; Nomura, Yoshihiko; Sakamoto, Ryota

    2013-01-01

    When learning complicated movements on our own, we encounter problems such as self-rightness: it leads to a lack of detail and objectivity, and it may cause us to miss, or even distort, the essentials of a motion. As a result, we sometimes fall into the habit of performing inappropriate motions. To solve these problems, or at least to alleviate them as much as possible, we have been developing mechanical man-machine interfaces to support the learning of motions such as cultural gestures and sports forms. One of the promising interfaces is a wearable exoskeleton mechanical system. As a first attempt, we have made a prototype of a 2-link, 1-DOF rotational elbow joint interface for teaching forearm extension-flexion operations, and have found that it shows potential for teaching the initiation and continuation of elbow flexion.

  4. A Workshop on the Gathering of Information for Problem Formulation

    DTIC Science & Technology

    1991-06-01

    the AI specialists is to design "artificially intelligent" computer environments that tutor students in much the same way that a human teacher might...tuning the interface between student and machine, and are using a technique of in situ development to tune the system toward realistic user needs...of transferability to new domains, while the latter suffers from extreme fragility: the inability to cope with any input not strictly conforming with

  5. Man-Machine Interface (MMI) Requirements Definition and Design Guidelines

    DTIC Science & Technology

    1981-02-01

    be provided to interrogate the user to resolve any input ambiguities resulting from hardware limitations; see Smith and Goodwin, 1971. Reference: Smith, S. L. and Goodwin, N. C. Alphabetic data entry via the Touch-Tone pad: A comment. Human Factors, 1971, 13(2), 189-190. ...software designer. Reference: Miller, R. B. Response time in man-computer conversational transactions. In Proceedings of the AFIPS Fall Joint Computer

  6. Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non-LOCA and small-break LOCA transients; safety goals; pressurized thermal shock; applications of reliability and risk methods to probabilistic risk assessment; human factors and man-machine interface; and databases and special applications.

  7. The Mind and the Machine. On the Conceptual and Moral Implications of Brain-Machine Interaction.

    PubMed

    Schermer, Maartje

    2009-12-01

    Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprosthesis. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our 'symbolic order', that is, on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding distribution and attribution of responsibility.

  8. An Evaluation of Departmental Radiation Oncology Incident Reports: Anticipating a National Reporting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric

    Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) of these events would be submittable to a NRS, of which the majority was related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.

  9. Pixels, people, perception, pet peeves, and possibilities: a look at displays

    NASA Astrophysics Data System (ADS)

    Task, H. Lee

    2007-04-01

    This year marks the 35th anniversary of the Visually Coupled Systems symposium held at Brooks Air Force Base, San Antonio, Texas in November of 1972. This paper uses the proceedings of the 1972 VCS symposium as a guide to address several topics associated primarily with helmet-mounted displays, systems integration and the human-machine interface. Specific topics addressed include monocular and binocular helmet-mounted displays (HMDs), visor projection HMDs, color HMDs, system integration with aircraft windscreens, visual interface issues and others. In addition, this paper also addresses a few mysteries and irritations (pet peeves) collected over the past 35+ years of experience in the display and display-related areas.

  10. Designing Guiding Systems for Brain-Computer Interfaces

    PubMed Central

    Kosmyna, Nataliya; Lécuyer, Anatole

    2017-01-01

    The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore the instructions and feedback for BCIs in order to provide a systematic taxonomy to describe BCI guiding systems. The purpose of our work is to give necessary clues to researchers and designers in Human–Computer Interaction (HCI) to make the fusion between BCIs and HCI more fruitful, but also to better understand the possibilities BCIs can provide to them. PMID:28824400

  11. INCOMMANDS TDP: Human Factors Design and Evaluation Guide (PDT INCOMMANDS: Guide de Conception et d’Evaluation des Facteurs Humains)

    DTIC Science & Technology

    2009-12-01

    Human-Computer Interface (AHCI) Style Guide, (Report No. 64201-97U/61223), Veridian, Veda Operations, Dayton, Ohio. [13] Osga, G. and Kellmeyer, D... [14] Osga, G. and Kellmeyer, D. (2000), Combat

  12. Model and experiments to optimize co-adaptation in a simplified myoelectric control system.

    PubMed

    Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A

    2018-04-01

    To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that were developed in the field of brain-machine interfaces and that are beginning to be used in myoelectric control. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied to the muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase the adaptation rate but also the errors, by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and raised important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
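
    The co-adaptation setting described above can be pictured with a toy trial-by-trial simulation: a fixed rotation perturbs the decoded direction, the user updates an aim correction in proportion to the directional error, and the decoder counter-rotates with its own gain. The sketch below is only a minimal illustration of that interplay; the learning rate, noise level, and gain values are assumptions, not parameters fitted in the study.

```python
import numpy as np

def simulate(trials=300, perturb=np.deg2rad(30), alpha=0.2,
             machine_gain=0.1, noise_sd=np.deg2rad(5), seed=0):
    """Toy co-adaptation loop: the human and the decoder each correct a
    share of the observed directional error on every trial. All parameter
    values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    human_correction = 0.0     # human's learned aim offset (rad)
    machine_correction = 0.0   # decoder's learned counter-rotation (rad)
    errors = []
    for _ in range(trials):
        noise = rng.normal(0.0, noise_sd)
        # residual directional error observed on this trial
        error = perturb - human_correction - machine_correction + noise
        errors.append(np.rad2deg(error))
        human_correction += alpha * error          # human adaptation
        machine_correction += machine_gain * error  # machine co-adaptation
    return np.array(errors)

if __name__ == "__main__":
    for gain in (0.0, 0.1, 0.4):
        err = simulate(machine_gain=gain)
        print(f"machine gain {gain:>3}: mean |error| over first 10 trials = "
              f"{np.abs(err[:10]).mean():5.1f} deg, "
              f"sd over last 100 trials = {err[-100:].std():4.2f} deg")
```

    In this simplified model, increasing the machine gain speeds up early convergence while slightly inflating the trial-to-trial variability of the residual error, which is the trade-off a variable gain is meant to navigate.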

  13. The Portals 4.0 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2012-11-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  14. Tattoolike Polyaniline Microparticle-Doped Gold Nanowire Patches as Highly Durable Wearable Sensors.

    PubMed

    Gong, Shu; Lai, Daniel T H; Wang, Yan; Yap, Lim Wei; Si, Kae Jye; Shi, Qianqian; Jason, Naveen Noah; Sridhar, Tam; Uddin, Hemayet; Cheng, Wenlong

    2015-09-09

    Wearable and highly sensitive strain sensors are essential components of electronic skin for future biomonitoring and human-machine interfaces. Here we report a low-cost yet efficient strategy to dope polyaniline microparticles into gold nanowire (AuNW) films, leading to a 10-fold enhancement in conductivity and an ∼8-fold improvement in sensitivity. Simultaneously, tattoolike wearable sensors could be fabricated simply by a direct "draw-on" strategy with a Chinese penbrush. The stretchability of the sensors could be enhanced from 99.7% to 149.6% by designing curved tattoos with different radii of curvature. We also demonstrated a roller-coating method to encapsulate the AuNW sensors, exhibiting excellent water resistance and durability. Because of the improved conductivity of our sensors, they can directly interface with existing wireless circuitry, allowing for the fabrication of wireless flexion sensors for a human finger-controlled robotic arm system.

  15. [Neurophysiological Foundations and Practical Realizations of Brain-Machine Interface Technology in Neurological Rehabilitation].

    PubMed

    Kaplan, A Ya

    2016-01-01

    Brain-computer interface (BCI) technology based on the recording and interpretation of EEG has recently become one of the most popular developments in neuroscience and psychophysiology. This is due not only to the intended future use of these technologies in many areas of practical human activity, but also to the fact that the BCI is a completely new paradigm in psychophysiology, allowing researchers to test hypotheses about the brain's capacity to develop skills for interacting with the outside world without the mediation of the motor system, i.e., solely through voluntary modulation of EEG generators. This paper examines the theoretical and experimental basis, the current state, and the development prospects of training, communication, and assistive systems based on BCI, designed to be controlled without muscular effort through mental commands detected in the EEG of patients with severely impaired speech and motor function.

  16. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

    PubMed

    Crouser, R J; Chang, R

    2012-12-01

    Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

  17. The Muscle Sensor for on-site neuroscience lectures to pave the way for a better understanding of brain-machine-interface research.

    PubMed

    Koizumi, Amane; Nagata, Osamu; Togawa, Morio; Sazi, Toshiyuki

    2014-01-01

    Neuroscience is an expanding field of science that investigates the enigmas of brain and human body function. However, the majority of the public have never had the chance to learn the basics of neuroscience and the new knowledge produced by advanced neuroscience research through hands-on experience. Here, we report that we produced the Muscle Sensor, a simplified electromyograph, to promote educational understanding of neuroscience. The Muscle Sensor can detect myoelectric potentials, which are filtered and processed as 3-V pulse signals to light a bulb and emit beep sounds. With this educational tool, we delivered "On-Site Neuroscience Lectures" in Japanese junior-high schools to facilitate hands-on experience of neuroscientific electrophysiology and to connect students' textbook knowledge to advanced neuroscience research. On-site neuroscience lectures with the Muscle Sensor pave the way for a better understanding of the basics of neuroscience and of the latest topics, such as how brain-machine-interface technology could help patients with disabilities such as spinal cord injuries. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Selectivity and Longevity of Peripheral-Nerve and Machine Interfaces: A Review

    PubMed Central

    Ghafoor, Usman; Kim, Sohee; Hong, Keum-Shik

    2017-01-01

    For individuals with upper-extremity amputation, normal daily living activities are no longer possible, or they require additional effort and time. With the aim of restoring their sensory and motor functions, theoretical and technological investigations have been carried out in the field of neuroprosthetic systems. For transmission of sensory feedback, several interfacing modalities including indirect (non-invasive), direct-to-peripheral-nerve (invasive), and cortical stimulation have been applied. Peripheral nerve interfaces demonstrate an edge over cortical interfaces owing to the difficulty of reliably attaining cortical brain signals. Peripheral nerve interfaces are highly dependent on interface design and are required to be biocompatible with the nerves to achieve prolonged stability and longevity. Another criterion is the selection of nerves that allows minimal invasiveness and damage as well as high selectivity for a large number of nerve fascicles. In this paper, we review the nerve-machine interface modalities noted above with more focus on peripheral nerve interfaces, which are responsible for the provision of sensory feedback. The invasive interfaces for recording and stimulation of electro-neurographic signals include intra-fascicular, regenerative-type interfaces that provide multiple contact channels to a group of axons inside the nerve, and extra-neural-cuff-type interfaces that enable interaction with many axons around the periphery of the nerve. The section Current Prosthetic Technology summarizes the advancements made to date in the field of neuroprosthetics toward the achievement of a bidirectional nerve-machine interface, with more focus on sensory feedback. In the Discussion section, the authors propose a hybrid interface technique for achieving better selectivity and long-term stability using the available nerve interfacing techniques. PMID:29163122

  19. Human computer interface guide, revision A

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.

  20. Computer-Based Tools for Evaluating Graphical User Interfaces

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1997-01-01

    The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; rather, it should be part of an iterative design cycle in which the output of evaluation is fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.

  1. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
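
    The conversion of quantitative RGB primaries into qualitative color representations mentioned above can be pictured as a hue-based lookup after transforming RGB to a hue/saturation/value space. The sketch below is a hypothetical illustration of that general idea only; it does not reproduce the CHIMES method, and the hue boundaries and color names are assumptions.

```python
import colorsys

# Hypothetical hue ranges (degrees) mapped to qualitative color names.
HUE_NAMES = [(15, "red"), (45, "orange"), (70, "yellow"), (170, "green"),
             (260, "blue"), (330, "purple"), (360, "red")]

def qualitative_color(r, g, b):
    """Map 8-bit RGB primaries to a coarse qualitative color label."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.15:
        return "black"
    if s < 0.15:
        return "white" if v > 0.85 else "gray"
    hue_deg = h * 360.0
    for upper, name in HUE_NAMES:
        if hue_deg <= upper:
            return name
    return "red"

print(qualitative_color(200, 30, 30))   # -> red
print(qualitative_color(30, 30, 200))   # -> blue
```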

  2. Online Artifact Removal for Brain-Computer Interfaces Using Support Vector Machines and Blind Source Separation

    PubMed Central

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method. PMID:18288259
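
    The overall filter structure (decompose the multichannel signal into components, classify each component as artifact or non-artifact, zero the artifactual ones, and reconstruct) can be sketched as below. This illustration uses FastICA and a linear SVM on synthetic data, with the component labels derived from the simulation itself; the actual study compared JADE, Infomax, FastICA, and AMUSE and trained its SVMs on labeled EMG/EOG components, so the data, features, and threshold below are assumptions.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, n_sec, n_ch = 250, 8, 4
t = np.arange(fs * n_sec) / fs

# Synthetic "EEG": two neural-like oscillations plus an EOG-like blink source
neural1 = np.sin(2 * np.pi * 10 * t)
neural2 = np.sin(2 * np.pi * 22 * t)
blinks = np.zeros_like(t)
blinks[::fs] = 10.0                       # sparse, high-amplitude events
mixing = rng.normal(size=(n_ch, 3))
eeg = np.column_stack([neural1, neural2, blinks]) @ mixing.T
eeg += 0.1 * rng.normal(size=eeg.shape)

# 1) Blind source separation
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(eeg)          # shape (n_samples, n_components)

# 2) Per-component features; artifactual components tend to be spiky
#    (high kurtosis). Labels are derived from the simulation only to
#    train an illustrative classifier.
feats = np.column_stack([kurtosis(sources, axis=0), np.ptp(sources, axis=0)])
labels = (kurtosis(sources, axis=0) > 3).astype(int)   # 1 = artifact
clf = SVC(kernel="linear").fit(feats, labels)

# 3) Zero out flagged components and reconstruct the cleaned channels
cleaned = sources.copy()
cleaned[:, clf.predict(feats) == 1] = 0.0
eeg_clean = ica.inverse_transform(cleaned)
print("per-channel variance after cleaning:", np.var(eeg_clean, axis=0).round(3))
```

    In an online setting, the mixing matrix and trained classifier would be held fixed and applied to each incoming data block, which is what allows the method to act as a real-time filter during feedback.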

  3. Online artifact removal for brain-computer interfaces using support vector machines and blind source separation.

    PubMed

    Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang

    2007-01-01

    We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method.

  4. Decoding position, velocity, or goal: does it matter for brain-machine interfaces?

    PubMed

    Marathe, A R; Taylor, D M

    2011-04-01

    Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact on device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.
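
    One of the remappings discussed above, using a decoded position signal as a velocity command (the joystick analogy), reduces to scaling and integrating the decoded output over time. The sketch below illustrates only that transformation; the gain, dead zone, and simulated decoder noise are assumptions rather than values from the study.

```python
import numpy as np

def remap_position_to_velocity(decoded_pos, gain=1.5, dt=0.05,
                               deadzone=0.1, start=(0.0, 0.0)):
    """Illustrative remapping: treat the decoded hand position (relative to
    a neutral point) as a velocity command and integrate it to move the
    device, like a joystick. Gain and dead-zone values are assumptions."""
    pos = np.asarray(start, dtype=float)
    trajectory = [pos.copy()]
    for p in decoded_pos:
        cmd = np.asarray(p, dtype=float)
        if np.linalg.norm(cmd) < deadzone:   # ignore small decoder noise
            cmd = np.zeros(2)
        pos = pos + gain * cmd * dt          # position -> velocity mapping
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Simulated noisy decoded positions held to the "right" for one second
rng = np.random.default_rng(1)
decoded = np.tile([1.0, 0.0], (20, 1)) + 0.05 * rng.normal(size=(20, 2))
print(remap_position_to_velocity(decoded)[-1])   # device drifts rightward
```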

  5. Decoding position, velocity, or goal: Does it matter for brain-machine interfaces?

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2011-04-01

    Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact on device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.

  6. European public deliberation on brain machine interface technology: five convergence seminars.

    PubMed

    Jebari, Karim; Hansson, Sven-Ove

    2013-09-01

    We present a novel procedure to engage the public in ethical deliberations on the potential impacts of brain machine interface technology. We call this procedure a convergence seminar, a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of five seminars are presented.

  7. Histological evaluation of a chronically-implanted electrocorticographic electrode grid in a non-human primate

    NASA Astrophysics Data System (ADS)

    Degenhart, Alan D.; Eles, James; Dum, Richard; Mischel, Jessica L.; Smalianchuk, Ivan; Endler, Bridget; Ashmore, Robin C.; Tyler-Kabara, Elizabeth C.; Hatsopoulos, Nicholas G.; Wang, Wei; Batista, Aaron P.; Cui, X. Tracy

    2016-08-01

    Objective. Electrocorticography (ECoG), used as a neural recording modality for brain-machine interfaces (BMIs), potentially allows for field potentials to be recorded from the surface of the cerebral cortex for long durations without suffering the host-tissue reaction to the extent that it is common with intracortical microelectrodes. Though the stability of signals obtained from chronically implanted ECoG electrodes has begun receiving attention, to date little work has characterized the effects of long-term implantation of ECoG electrodes on underlying cortical tissue. Approach. We implanted and recorded from a high-density ECoG electrode grid subdurally over cortical motor areas of a Rhesus macaque for 666 d. Main results. Histological analysis revealed minimal damage to the cortex underneath the implant, though the grid itself was encapsulated in collagenous tissue. We observed macrophages and foreign body giant cells at the tissue-array interface, indicative of a stereotypical foreign body response. Despite this encapsulation, cortical modulation during reaching movements was observed more than 18 months post-implantation. Significance. These results suggest that ECoG may provide a means by which stable chronic cortical recordings can be obtained with comparatively little tissue damage, facilitating the development of clinically viable BMI systems.

  8. An Investment Behavior Analysis Using a Brain Computer Interface

    NASA Astrophysics Data System (ADS)

    Suzuki, Kyoko; Kinoshita, Kanta; Miyagawa, Kazuhiro; Shiomi, Shinichi; Misawa, Tadanobu; Shimokawa, Tetsuya

    In this paper, we construct a new Brain Computer Interface (BCI) for the purpose of analyzing human investment decision making. The BCI is made up of three functional parts, which measure brain information, determine market prices in an artificial market, and specify the investment decision model, respectively. When subjects make decisions, their brain information is conveyed to the part that specifies the investment decision model through the part that measures brain information, whereas their investment orders are sent to the artificial market part to form market prices. Both a support vector machine and a 3-layer perceptron are used to assess the investment decision model. In order to evaluate our BCI, we conduct an experiment in which subjects and a computer trader agent trade shares of stock in the artificial market, and we test how well the computer trader agent can forecast market price formation and investment decisions from the brain information of subjects. The result of the experiment shows that the brain information can improve the accuracy of the forecasts, and so the computer trader agent can supply market liquidity to stabilize market volatility without incurring losses.

  9. Virtual reality in surgical training.

    PubMed

    Lange, T; Indelicato, D J; Rosen, J M

    2000-01-01

    Virtual reality in surgery and, more specifically, in surgical training, faces a number of challenges in the future. These challenges are building realistic models of the human body, creating interface tools to view, hear, touch, feel, and manipulate these human body models, and integrating virtual reality systems into medical education and treatment. A final system would encompass simulators specifically for surgery, performance machines, telemedicine, and telesurgery. Each of these areas will need significant improvement for virtual reality to impact medicine successfully in the next century. This article gives an overview of, and the challenges faced by, current systems in the fast-changing field of virtual reality technology, and provides a set of specific milestones for a truly realistic virtual human body.

  10. Advances in data representation for hard/soft information fusion

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey C.; Coughlin, Dan; Hall, David L.; Graham, Jacob L.

    2012-06-01

    Information fusion is becoming increasingly human-centric. While past systems typically relegated humans to the role of analyzing a finished fusion product, current systems are exploring the role of humans as integral elements in a modular and extensible distributed framework where many tasks can be accomplished by either human or machine performers. For example, "participatory sensing" campaigns give humans the role of "soft sensors", who upload their direct observations, or of "soft sensor platforms", who use mobile devices to record human-annotated, GPS-encoded, high-quality photographs, video, or audio. Additionally, in the "human-in-the-loop" role, individuals or teams using advanced human-computer interface (HCI) tools such as stereoscopic 3D visualization, haptic interfaces, or aural "sonification" interfaces can effectively engage the innate human capability to perform pattern matching, anomaly identification, and semantic-based contextual reasoning to interpret an evolving situation. The Pennsylvania State University is participating in a Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office to investigate fusion of hard and soft data in counterinsurgency (COIN) situations. In addition to the importance of this research for Intelligence Preparation of the Battlefield (IPB), many of the same challenges and techniques apply to health and medical informatics, crisis management, crowd-sourced "citizen science", and monitoring environmental concerns. One of the key challenges that we have encountered is the development of data formats, protocols, and methodologies to establish an information architecture and framework for the effective capture, representation, transmission, and storage of vastly heterogeneous data and accompanying metadata, including capabilities and characteristics of human observers, uncertainty of human observations, "soft" contextual data, and information pedigree. This paper describes our findings and offers insights into the role of data representation in hard/soft fusion.

  11. Enhanced operator interface for hand-held landmine detector

    NASA Astrophysics Data System (ADS)

    Herman, Herman; McMahill, Jeffrey D.; Kantor, George

    2001-10-01

    As landmines get harder to detect, the complexity of landmine detectors has also been increasing. To increase the probability of detection and decrease the false alarm rate for low-metallic landmines, many detectors employ multiple sensing modalities, which include radar and a metal detector. Unfortunately, the operator interface for these new detectors stays pretty much the same as for the older detectors. Although the amount of information that the new detectors acquire has increased significantly, the interface has been limited to a simple audio interface. We are currently developing a hybrid audiovisual interface for enhancing the overall performance of the detector. The hybrid audiovisual interface combines the simplicity of the audio output with the rich spatial content of the video display. It is designed to optimally present the output of the detector and also to give the proper feedback to the operator. Instead of presenting all the data to the operator simultaneously, the interface allows the operator to access the information as needed. This capability is critical to avoid information overload, which can significantly reduce the performance of the operator. The audio is used as the primary notification signal, while the video is used for further feedback, discrimination, localization, and sensor fusion. The idea is to let the operator get the feedback that he needs and to enable him to look at the data in the most efficient way. We are also looking at a hybrid man-machine detection system which utilizes precise sweeping by the machine and powerful human cognitive ability. In such a hybrid system, the operator is free to concentrate on discrimination tasks, such as manually fusing the output of the different sensing modalities, instead of worrying about the proper sweep technique. In developing this concept, we have been using a virtual mine lane to validate some of these concepts. We obtained some very encouraging results from our preliminary test. It clearly shows that with the proper feedback, the performance of the operator can be improved significantly in a very short time.

  12. Human-machine interfaces based on EMG and EEG applied to robotic systems.

    PubMed

    Ferreira, Andre; Celeste, Wanderley C; Cheein, Fernando A; Bastos-Filho, Teodiano F; Sarcinelli-Filho, Mario; Carelli, Ricardo

    2008-03-26

    Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals. One is based on the EMG signal and the other is based on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which need only modest hardware and software resources, so that they are, computationally and financially speaking, low-cost solutions. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested in a mobile robot, while the EEG-based HMI was tested in a mobile robot and a robotic manipulator as well. Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to perform ten eye blinks with each eye, in order to test the eye blink detection algorithm. An average success rate of about 95%, reached by individuals with the ability to blink both eyes, allowed us to conclude that the system could be used to command devices. Experiments with EEG consisted of inviting 25 people (some of them had suffered cases of meningitis and epilepsy) to test the system. All of them managed to deal with the HMI in only one training session. Most of them learnt how to use the HMI in less than 15 minutes. The minimum and maximum training times observed were 3 and 50 minutes, respectively. These works are the initial parts of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the autonomous wheelchair thus obtained to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, with the aim of using it to help individuals with severe motor dysfunctions.
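
    The eye-blink detection underlying the EMG-based HMI can be pictured as a simple amplitude-threshold detector with a refractory period. The sketch below is an illustrative stand-in, not the authors' algorithm; the threshold, refractory time, and synthetic signal are assumptions.

```python
import numpy as np

def detect_blinks(signal, fs, threshold=3.0, refractory_s=0.3):
    """Toy eye-blink detector: flag samples whose amplitude exceeds a
    z-score threshold, then merge detections closer than a refractory
    period. Threshold and refractory values are illustrative."""
    z = (signal - signal.mean()) / signal.std()
    above = np.where(np.abs(z) > threshold)[0]
    blinks, last = [], -np.inf
    for idx in above:
        if idx - last > refractory_s * fs:
            blinks.append(idx / fs)        # blink onset time in seconds
        last = idx
    return blinks

fs = 500
t = np.arange(0, 5, 1 / fs)
emg = 0.05 * np.random.default_rng(2).normal(size=t.size)
for blink_t in (1.0, 2.5, 4.0):            # inject three synthetic blinks
    i = int(blink_t * fs)
    emg[i:i + 50] += 1.0
print(detect_blinks(emg, fs))              # approximately [1.0, 2.5, 4.0]
```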

  13. Modifications to Optimize the AH-1Z Human Machine Interface

    DTIC Science & Technology

    2013-04-18

    accomplish this, a complete workload study of tasks performed by aircrew in the AH-1Z must be completed in the near future in order to understand...design flaws and guide future design and integration of increased capability. Additionally, employment of material solutions to provide aircrew with the...

  14. A Survey of Research in Supervisory Control and Data Acquisition (SCADA)

    DTIC Science & Technology

    2014-09-01

    distance learning. The data acquired may be operationally oriented and used to better run the system, or it could be strategic in nature and used to...Technically the SCADA system is composed of the information technology (IT) that provides the human-machine interface (HMI) and stores and analyzes the data...systems work by learning what normal or benign traffic is and reporting on any abnormal traffic. These systems have the potential to detect zero-day

  15. Developing Human-Machine Interfaces to Support Appropriate Trust and Reliance on Automated Combat Identification Systems

    DTIC Science & Technology

    2007-09-17

    Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens & Hollands, 2000). In SDT, the participants’ performance is characterized by two...probability, whereas their sensitivity will stay constant (Macmillan & Creelman, 1991; Wickens & Hollands, 2000). If this hypothesis holds, it will...Macmillan & Creelman, 1991, p. 273), and it was also the measure used in Dzindolet et al.’s study (2001a). Thus, C was used in the analysis HMIs for Trust and
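
    The snippet above refers to the Signal Detection Theory measures of sensitivity (d') and response criterion (C). As a brief worked illustration, both can be computed from hit and false-alarm rates as shown below; the counts are hypothetical, not data from the report.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Standard SDT measures from a 2x2 outcome table: sensitivity d' and
    response criterion C (the measure referenced above)."""
    z = NormalDist().inv_cdf
    # A small correction keeps rates away from 0 and 1 (a common convention)
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hr) - z(far)
    criterion = -0.5 * (z(hr) + z(far))
    return d_prime, criterion

# Hypothetical combat-ID outcomes: 80 hits, 20 misses, 10 FAs, 90 CRs
print(dprime_and_criterion(80, 20, 10, 90))
```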

  16. Microstructured graphene arrays for highly sensitive flexible tactile sensors.

    PubMed

    Zhu, Bowen; Niu, Zhiqiang; Wang, Hong; Leow, Wan Ru; Wang, Hua; Li, Yuangang; Zheng, Liyan; Wei, Jun; Huo, Fengwei; Chen, Xiaodong

    2014-09-24

    A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers. The combination of graphene and anisotropic microstructures endows this sensor with an ultra-high sensitivity of -5.53 kPa(-1), an ultra-fast response time of only 0.2 ms, as well as good reliability, rendering it promising for the application of tactile sensing in artificial skin and human-machine interfaces. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
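
    The reported sensitivity of -5.53 kPa(-1) is a relative signal change per unit pressure, commonly defined as S = (ΔI/I0)/ΔP for a resistive or piezoresistive sensor. A short worked example with hypothetical readings that reproduce the reported magnitude:

```python
# Pressure-sensor sensitivity: S = (ΔI / I0) / ΔP.
# The readings below are hypothetical, not data from the paper.
i0 = 10.0e-6          # baseline current (A) at zero pressure
i_loaded = 4.47e-6    # current (A) after applying 0.1 kPa
delta_p = 0.1         # applied pressure (kPa)

sensitivity = ((i_loaded - i0) / i0) / delta_p
print(f"S = {sensitivity:.2f} kPa^-1")   # about -5.53 kPa^-1
```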

  17. Effects of pulsed Nd:YAG laser on tensile bond strength and caries resistance of human enamel.

    PubMed

    Wen, X; Zhang, L; Liu, R; Deng, M; Wang, Y; Liu, L; Nie, X

    2014-01-01

    This study aims to evaluate the effects of pulsed Nd:YAG laser on the tensile bond strength (TBS) of resin to human enamel and caries resistance of human enamel. A total of 201 human premolars were used in this in vitro study. A flat enamel surface greater than 4 × 4 mm in area was prepared on each specimen using a low-speed cutting machine under a water coolant. Twenty-one specimens were divided into seven groups for morphology observations with no treatment, 35% phosphoric acid etching (30 seconds), and laser irradiation (30 seconds) of pulsed Nd:YAG laser with five different laser-parameter combinations. Another 100 specimens were used for TBS testing. They were embedded in self-cured acrylic resin and randomly divided into 10 groups. After enamel surface pretreatments according to the group design, resin was applied. The TBS values were tested using a universal testing machine. The other 80 specimens were randomly divided into eight groups for acid resistance evaluation. Scanning electron microscope (SEM) results showed that the enamel surfaces treated with 1.5 W/20 Hz and 2.0 W/20 Hz showed more etching-like appearance than those with other laser-parameter combinations. The laser-parameter combinations of 1.5 W/15 Hz and 1.5 W/20 Hz were found to be efficient for the TBS test. The mean TBS value of 14.45 ± 1.67 MPa in the laser irradiated group was significantly higher than that in the untreated group (3.48 ± 0.35 MPa) but lower than that in the 35% phosphoric acid group (21.50 ± 3.02 MPa). The highest mean TBS value of 26.64 ± 5.22 MPa was identified in the combination group (laser irradiation and then acid etching). Acid resistance evaluation showed that the pulsed Nd:YAG laser was efficient in preventing enamel demineralization. The SEM results of the fractured enamel surfaces, resin/enamel interfaces, and demineralization depths were consistent with those of the TBS test and the acid resistance evaluation. Pulsed Nd:YAG laser as an enamel surface pretreatment method presents a potential clinical application, especially for the caries-susceptible population or individuals with recently bleached teeth.

  18. Decoding semantic information from human electrocorticographic (ECoG) signals.

    PubMed

    Wang, Wei; Degenhart, Alan D; Sudre, Gustavo P; Pomerleau, Dean A; Tyler-Kabara, Elizabeth C

    2011-01-01

    This study examined the feasibility of decoding semantic information from human cortical activity. Four human subjects undergoing presurgical brain mapping and seizure foci localization participated in this study. Electrocorticographic (ECoG) signals were recorded while the subjects performed simple language tasks involving semantic information processing, such as a picture naming task where subjects named pictures of objects belonging to different semantic categories. Robust high-gamma band (60-120 Hz) activation was observed at the left inferior frontal gyrus (LIFG) and the posterior portion of the superior temporal gyrus (pSTG) with a temporal sequence corresponding to speech production and perception. Furthermore, Gaussian Naïve Bayes and Support Vector Machine classifiers, two commonly used machine learning algorithms for pattern recognition, were able to predict the semantic category of an object using cortical activity captured by ECoG electrodes covering the frontal, temporal and parietal cortices. These findings have implications for both basic neuroscience research and development of semantic-based brain-computer interface systems (BCI) that can help individuals with severe motor or communication disorders to express their intention and thoughts.
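
    The decoding pipeline described above, band-limited high-gamma power per electrode fed into a Gaussian Naive Bayes (or SVM) classifier, can be sketched as follows. The data here are synthetic stand-ins and the filter settings are assumptions; only the general feature-plus-classifier structure follows the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def high_gamma_power(ecog, fs, band=(60.0, 120.0)):
    """Log band power per channel in the high-gamma range; the filter
    order is an illustrative assumption."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecog, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1) + 1e-12)

# Synthetic stand-in data: 60 trials x 8 channels x 1 s of "ECoG" at 500 Hz,
# with a category-dependent gain on two channels (purely illustrative).
rng = np.random.default_rng(3)
fs, n_trials, n_ch, n_samp = 500, 60, 8, 500
labels = rng.integers(0, 2, n_trials)          # two semantic categories
trials = rng.normal(size=(n_trials, n_ch, n_samp))
trials[labels == 1, :2, :] *= 1.5              # category effect on channels 0-1

features = np.array([high_gamma_power(tr, fs) for tr in trials])
score = cross_val_score(GaussianNB(), features, labels, cv=5).mean()
print(f"cross-validated accuracy: {score:.2f}")
```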

  19. The human role in space. Volume 3: Generalizations on human roles in space

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The human role in space was studied. The role and the degree of direct involvement of humans that will be required in future space missions was investigated. Valid criteria for allocating functional activities between humans and machines were established. The technology requirements, economics, and benefits of the human presence in space were examined. Factors which affect crew productivity include: internal architecture; crew support; crew activities; LVA systems; IVA/EVA interfaces; and remote systems management. The accomplished work is reported, and the data and analyses from which the study results are derived are included. The results provide information and guidelines to enable NASA program managers and decision makers to establish, early in the design process, the most cost-effective design approach for future space programs, through the optimal application of unique human skills and capabilities in space.

  20. The portals 4.0.1 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2013-04-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  1. Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control

    NASA Astrophysics Data System (ADS)

    Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.

    2012-04-01

    A brain-machine interface (BMI) links a user's brain activity directly to an external device. It enables a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has also been proposed to replace humans with robots in the performance of dangerous tasks like explosives handling/defusing, hazardous materials handling, fire fighting, etc. There are mainly two types of BMI based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features, and finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface with a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. The acceleration and deceleration or stopping of the robot is controlled based on the attention level of the wearer. In addition, left and right eyeball movements control the direction of the robot.
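
    The classification of EEG into delta, theta, alpha, and beta bands and its mapping to an attention-dependent speed command can be illustrated with a relative band-power computation. The sketch below is a minimal illustration; the attention index and speed thresholds are assumptions, not the rules used in the paper.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Relative power in the classical EEG bands using Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    total = psd.sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

def attention_to_speed(powers, max_speed=0.5):
    """Hypothetical mapping from a beta/(alpha+theta) attention index to a
    forward-speed command; the thresholds are illustrative only."""
    index = powers["beta"] / (powers["alpha"] + powers["theta"] + 1e-12)
    if index > 1.0:
        return max_speed          # attentive: full speed
    if index > 0.5:
        return 0.5 * max_speed    # moderate attention
    return 0.0                    # inattentive: stop

fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
p = band_powers(eeg, fs)
print(p, "->", attention_to_speed(p), "m/s")
```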

  2. Volitional enhancement of firing synchrony and oscillation by neuronal operant conditioning: interaction with neurorehabilitation and brain-machine interface

    PubMed Central

    Sakurai, Yoshio; Song, Kichan; Tachibana, Shota; Takahashi, Susumu

    2014-01-01

    In this review, we focus on neuronal operant conditioning in which increments in neuronal activities are directly rewarded without behaviors. We discuss the potential of this approach to elucidate neuronal plasticity for enhancing specific brain functions and its interaction with the progress in neurorehabilitation and brain-machine interfaces. The key to-be-conditioned activities that this paper emphasizes are synchronous and oscillatory firings of multiple neurons that reflect activities of cell assemblies. First, we introduce certain well-known studies on neuronal operant conditioning in which conditioned enhancements of neuronal firing were reported in animals and humans. These studies demonstrated the feasibility of volitional control over neuronal activity. Second, we refer to the recent studies on operant conditioning of synchrony and oscillation of neuronal activities. In particular, we introduce a recent study showing volitional enhancement of oscillatory activity in monkey motor cortex and our study showing selective enhancement of firing synchrony of neighboring neurons in rat hippocampus. Third, we discuss the reasons for emphasizing firing synchrony and oscillation in neuronal operant conditioning, the main reason being that they reflect the activities of cell assemblies, which have been suggested to be basic neuronal codes representing information in the brain. Finally, we discuss the interaction of neuronal operant conditioning with neurorehabilitation and brain-machine interface (BMI). We argue that synchrony and oscillation of neuronal firing are the key activities required for developing both reliable neurorehabilitation and high-performance BMI. Further, we conclude that research of neuronal operant conditioning, neurorehabilitation, BMI, and system neuroscience will produce findings applicable to these interrelated fields, and neuronal synchrony and oscillation can be a common important bridge among all of them. PMID:24567704

  3. Volitional enhancement of firing synchrony and oscillation by neuronal operant conditioning: interaction with neurorehabilitation and brain-machine interface.

    PubMed

    Sakurai, Yoshio; Song, Kichan; Tachibana, Shota; Takahashi, Susumu

    2014-01-01

    In this review, we focus on neuronal operant conditioning in which increments in neuronal activities are directly rewarded without behaviors. We discuss the potential of this approach to elucidate neuronal plasticity for enhancing specific brain functions and its interaction with the progress in neurorehabilitation and brain-machine interfaces. The key to-be-conditioned activities that this paper emphasizes are synchronous and oscillatory firings of multiple neurons that reflect activities of cell assemblies. First, we introduce certain well-known studies on neuronal operant conditioning in which conditioned enhancements of neuronal firing were reported in animals and humans. These studies demonstrated the feasibility of volitional control over neuronal activity. Second, we refer to the recent studies on operant conditioning of synchrony and oscillation of neuronal activities. In particular, we introduce a recent study showing volitional enhancement of oscillatory activity in monkey motor cortex and our study showing selective enhancement of firing synchrony of neighboring neurons in rat hippocampus. Third, we discuss the reasons for emphasizing firing synchrony and oscillation in neuronal operant conditioning, the main reason being that they reflect the activities of cell assemblies, which have been suggested to be basic neuronal codes representing information in the brain. Finally, we discuss the interaction of neuronal operant conditioning with neurorehabilitation and brain-machine interface (BMI). We argue that synchrony and oscillation of neuronal firing are the key activities required for developing both reliable neurorehabilitation and high-performance BMI. Further, we conclude that research of neuronal operant conditioning, neurorehabilitation, BMI, and system neuroscience will produce findings applicable to these interrelated fields, and neuronal synchrony and oscillation can be a common important bridge among all of them.

  4. Modified automatic teller machine prototype for older adults: a case study of participative approach to inclusive design.

    PubMed

    Chan, Chetwyn C H; Wong, Alex W K; Lee, Tatia M C; Chi, Iris

    2009-03-01

    The goal of this study was to enhance an existing automated teller machine (ATM) human-machine interface in order to accommodate the needs of older adults. Older adults were involved in the design and field test of the modified ATM prototype. The design of the user interface and functionality took the cognitive and physical abilities of older adults into account. The modified ATM system included only "cash withdrawal" and "transfer" functions based on the task demands and needs for services of older adults. One hundred and forty-one older adults (aged 60 or above) participated in the field test by operating modified or existing ATM systems. Those who operated the modified system were found to have significantly higher success rates than those who operated the existing system. The enhancement was most significant among older adults who had lower ATM-related abilities, a lower level of education, and no prior experience of using ATMs. This study demonstrates the usefulness of using a universal design and participatory approach to modify the existing ATM system for use by older adults. However, it also leads to a reduction in functionality of the enhanced system. Future studies should explore ways to develop a universal design ATM system which can satisfy the abilities and needs of all users in the entire population.

  5. On an efficient and effective intelligent transportation system (ITS) safety and traffic efficiency application with corresponding driver behavior

    NASA Astrophysics Data System (ADS)

    Ekedebe, Nnanna; Yu, Wei; Lu, Chao

    2015-06-01

    Driver distraction could result in safety compromises attributable to distractions from in-vehicle equipment usage [1]. The effective design of driver-vehicle interfaces (DVIs) and other human-machine interfaces (HMIs), together with their usability and accessibility while driving, therefore becomes important [2]. Driving distractions can be classified as: visual distractions (any activity that takes your eyes away from the road), cognitive distractions (any activity that takes your mind away from the course of driving), and manual distractions (any activity that takes your hands away from the steering wheel) [2]. In addition, multitasking while driving is a distracting activity that can increase the risk of vehicular accidents. To study the effect of driver behavior on the safety of the transportation system, we used an in-vehicle driver notification application to examine the effects of increasing driver distraction levels on the evaluation metrics of traffic efficiency and safety, using two types of driver models: young drivers (ages 16-25 years) and middle-age drivers (ages 30-45 years). Our evaluation data demonstrate that as a driver's distraction level is increased, less heed is given to route-change directives from the in-vehicle on-board unit (OBU) issued through textual, visual, audio, and haptic notifications. Interestingly, middle-age drivers proved more effective and resilient than young drivers in mitigating the negative effects of driver distraction [2].

  6. Improving Performance During Image-Guided Procedures

    PubMed Central

    Duncan, James R.; Tabriz, David

    2015-01-01

    Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628

  7. Effect of Different Movement Speed Modes on Human Action Observation: An EEG Study.

    PubMed

    Luo, Tian-Jian; Lv, Jitu; Chao, Fei; Zhou, Changle

    2018-01-01

    Action observation (AO) generates event-related desynchronization (ERD) suppressions in the human brain by activating partial regions of the human mirror neuron system (hMNS). The activation of the hMNS response to AO remains controversial for several reasons. Therefore, this study investigated the activation of the hMNS response to the speed factor of AO by controlling the movement speed modes of a humanoid robot's arm movements. Since hMNS activation is reflected by ERD suppressions, electroencephalography (EEG) with BCI analysis methods for ERD suppressions were used as the recording and analysis modalities. Six healthy individuals were asked to participate in experiments comprising five different conditions. Four incremental-speed AO tasks and a motor imagery (MI) task involving imaging of the same movement were presented to the individuals. Occipital and sensorimotor regions were selected for BCI analyses. The experimental results showed that hMNS activation was higher in the occipital region but more robust in the sensorimotor region. Since the attended information impacts the activations of the hMNS during AO, the pattern of hMNS activations first rises and subsequently falls to a stable level during incremental-speed modes of AO. The discipline curves suggested that a moderate speed within a decent inter-stimulus interval (ISI) range produced the highest hMNS activations. Since a brain-computer/machine interface (BCI) builds a pathway between human and computer/machine, the discipline curves will help to construct BCIs based on patterns of action observation (AO-BCI). Furthermore, a new method for constructing non-invasive brain-machine-brain interfaces (BMBIs) with moderate AO-BCI and motor imagery BCI (MI-BCI) was inspired by this paper.
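
    Since hMNS activation is quantified here through ERD suppression, it may help to recall the standard ERD/ERS computation: the percentage change of band power in a task window relative to a pre-stimulus reference window. A minimal sketch follows, on a synthetic band-limited signal with illustrative window choices.

```python
import numpy as np

def erd_percent(epoch, fs, ref_window, task_window):
    """Classical ERD/ERS quantification: percentage power change in a task
    window relative to a reference window. `epoch` is assumed to be already
    band-pass filtered (e.g., to the mu/alpha band)."""
    def mean_power(window):
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return np.mean(epoch[i0:i1] ** 2)
    p_ref, p_task = mean_power(ref_window), mean_power(task_window)
    return 100.0 * (p_task - p_ref) / p_ref   # negative values = ERD

fs = 250
t = np.arange(0, 4, 1 / fs)
mu = np.sin(2 * np.pi * 10 * t)
mu[2 * fs:] *= 0.6                     # simulated suppression after a stimulus at 2 s
print(f"{erd_percent(mu, fs, (0.5, 1.5), (2.5, 3.5)):.1f} %")   # about -64 %
```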

  8. LOSITAN: a workbench to detect molecular adaptation based on a Fst-outlier method.

    PubMed

    Antao, Tiago; Lopes, Ana; Lopes, Ricardo J; Beja-Pereira, Albano; Luikart, Gordon

    2008-07-28

    Testing for selection is becoming one of the most important steps in the analysis of multilocus population genetics data sets. Existing applications are difficult to use, leaving many non-trivial, error-prone tasks to the user. Here we present LOSITAN, a selection detection workbench based on a well-evaluated Fst-outlier detection method. LOSITAN greatly facilitates correct approximation of model parameters (e.g., genome-wide average, neutral Fst), and provides data import and export functions, iterative contour smoothing, and generation of graphics in an easy-to-use graphical user interface. LOSITAN is able to use modern multi-core processor architectures by locally parallelizing fdist, reducing computation time by half on current dual-core machines, with almost linear performance gains on machines with more cores. LOSITAN makes selection detection feasible for a much wider range of users, even for large population genomic datasets, by providing both an easy-to-use interface and the essential functionality to complete the whole selection detection process.
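
    The local parallelization strategy, splitting the independent neutral simulations across processor cores and pooling the results, is what yields the near-linear speedup mentioned above. The sketch below illustrates the idea with Python's multiprocessing and a stand-in simulation function; it is not LOSITAN's code, and the fdist run is replaced here by random draws.

```python
import os
import random
from multiprocessing import Pool

def simulate_batch(args):
    """Stand-in for one batch of neutral Fst simulations (the real workbench
    runs fdist); here we simply draw random 'Fst' values for illustration."""
    n_sims, seed = args
    rng = random.Random(seed)
    return [rng.betavariate(2, 20) for _ in range(n_sims)]

def parallel_simulations(total_sims=100_000, workers=None):
    workers = workers or os.cpu_count()
    per_worker = total_sims // workers
    jobs = [(per_worker, seed) for seed in range(workers)]
    with Pool(workers) as pool:
        batches = pool.map(simulate_batch, jobs)   # one batch per core
    return [fst for batch in batches for fst in batch]

if __name__ == "__main__":
    sims = parallel_simulations()
    print(len(sims), "simulated Fst values; mean =", sum(sims) / len(sims))
```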

  9. A simple ERP method for quantitative analysis of cognitive workload in myoelectric prosthesis control and human-machine interaction.

    PubMed

    Deeny, Sean; Chicoine, Caitlin; Hargrove, Levi; Parrish, Todd; Jayaraman, Arun

    2014-01-01

    Common goals in the development of human-machine interface (HMI) technology are to reduce cognitive workload and increase function. However, objective and quantitative outcome measures assessing cognitive workload have not been standardized for HMI research. The present study examines the efficacy of a simple event-related potential (ERP) measure of cortical effort during myoelectric control of a virtual limb for use as an outcome tool. Participants trained and tested on two methods of control, direct control (DC) and pattern recognition control (PRC), while electroencephalographic (EEG) activity was recorded. Eighteen healthy participants with intact limbs were tested using DC and PRC under three conditions: passive viewing, easy, and hard. Novel auditory probes were presented at random intervals during testing, and significant task-difficulty effects were observed in the P200, P300, and a late positive potential (LPP), supporting the efficacy of ERPs as a cognitive workload measure in HMI tasks. LPP amplitude distinguished DC from PRC in the hard condition with higher amplitude in PRC, consistent with lower cognitive workload in PRC relative to DC for complex movements. Participants completed trials faster in the easy condition using DC relative to PRC, but completed trials more slowly using DC relative to PRC in the hard condition. The results provide promising support for ERPs as an outcome measure for cognitive workload in HMI research such as prosthetics, exoskeletons, and other assistive devices, and can be used to evaluate and guide new technologies for more intuitive HMI control.
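
    The ERP measure described above amounts to epoching the EEG around each auditory probe, baseline-correcting, averaging across epochs, and reading out the mean amplitude in a post-stimulus window (for example, around the P300 or LPP latency). The sketch below runs on synthetic data; the window boundaries and injected effect are assumptions, not the study's exact settings.

```python
import numpy as np

def erp_amplitude(eeg, events, fs, window=(0.25, 0.45)):
    """Average epochs around probe onsets (0.2 s pre to 0.8 s post) and
    report the mean amplitude in a post-stimulus window."""
    pre, post = int(0.2 * fs), int(0.8 * fs)
    epochs = []
    for onset in events:
        i = int(onset * fs)
        if i - pre < 0 or i + post > eeg.size:
            continue
        epoch = eeg[i - pre:i + post].astype(float)
        epoch -= epoch[:pre].mean()                 # baseline correction
        epochs.append(epoch)
    erp = np.mean(epochs, axis=0)
    w0 = pre + int(window[0] * fs)
    w1 = pre + int(window[1] * fs)
    return erp[w0:w1].mean()

fs = 250
rng = np.random.default_rng(4)
eeg = rng.normal(size=fs * 60)
probes = np.arange(2, 58, 2.0)
for onset in probes:                                # inject a synthetic P300
    i = int(onset * fs)
    eeg[i + int(0.3 * fs): i + int(0.4 * fs)] += 2.0
print(f"mean P300-window amplitude: {erp_amplitude(eeg, probes, fs):.2f} (a.u.)")
```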

  10. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.

    PubMed

    Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C; López, Vladimir; Mery, Domingo; Cosmelli, Diego

    2017-01-01

    Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants who master the task.
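
    As a toy illustration of the modeling idea only (a plain discrete HMM scored with the forward algorithm, far simpler than the hierarchical formulation and Mixture of Experts described above; all probabilities below are invented):

```python
# Hedged sketch: forward-algorithm log-likelihood of a binary choice sequence
# under a discrete HMM, used to pick the more likely "strategy" model.
import numpy as np

def hmm_forward_loglik(obs, start, trans, emit):
    """obs: observation indices (e.g. 0/1 choices at each tree level);
    start: (S,) initial state probs; trans: (S, S); emit: (S, K)."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

seq = [0, 1, 1, 0]                              # one pass through a 4-level BDT
models = {"explore": (np.array([0.5, 0.5]),
                      np.full((2, 2), 0.5), np.full((2, 2), 0.5)),
          "exploit": (np.array([0.9, 0.1]),
                      np.array([[0.9, 0.1], [0.1, 0.9]]),
                      np.array([[0.9, 0.1], [0.1, 0.9]]))}
best = max(models, key=lambda m: hmm_forward_loglik(seq, *models[m]))
print("most likely strategy:", best)
```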

  11. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    NASA Technical Reports Server (NTRS)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry in a small, lightweight overall package. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when facing constraints on computational resources.
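
    One simple instance of the multichannel noise-reduction step mentioned above is delay-and-sum beamforming, sketched here under generic assumptions (a linear microphone array, known steering angle, nominal speed of sound); the actual spacesuit beamformer is more sophisticated:

```python
# Hedged sketch of delay-and-sum beamforming for a linear microphone array.
import numpy as np

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    """signals: (n_mics, n_samples); mic_positions: positions along the array (m).
    Aligns channels for a source at angle_deg and averages them."""
    delays = mic_positions * np.cos(np.deg2rad(angle_deg)) / c   # seconds per mic
    shifts = np.round(delays * fs).astype(int)
    shifts -= shifts.min()                                       # non-negative shifts
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([sig[s:s + n] for sig, s in zip(signals, shifts)])
    return aligned.mean(axis=0)                                  # coherent sum of speech
```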

  12. A Human-machine-interface Integrating Low-cost Sensors with a Neuromuscular Electrical Stimulation System for Post-stroke Balance Rehabilitation.

    PubMed

    Kumar, Deepesh; Das, Abhijit; Lahiri, Uttama; Dutta, Anirban

    2016-04-12

    A stroke is caused when an artery carrying blood from the heart to an area of the brain bursts or a clot obstructs the blood flow to the brain, thereby preventing delivery of oxygen and nutrients. About half of stroke survivors are left with some degree of disability. Innovative methodologies for restorative neurorehabilitation are urgently required to reduce long-term disability. The ability of the nervous system to reorganize its structure, function and connections in response to intrinsic or extrinsic stimuli is called neuroplasticity. Neuroplasticity is involved in post-stroke functional disturbances, but also in rehabilitation. Beneficial neuroplastic changes may be facilitated with non-invasive electrotherapy, such as neuromuscular electrical stimulation (NMES) and sensory electrical stimulation (SES). NMES involves coordinated electrical stimulation of motor nerves and muscles to activate them with continuous short pulses of electrical current, while SES involves stimulation of sensory nerves with electrical current, resulting in sensations that vary from barely perceivable to highly unpleasant. Here, active cortical participation in rehabilitation procedures may be facilitated by driving the non-invasive electrotherapy with biosignals (electromyogram (EMG), electroencephalogram (EEG), electrooculogram (EOG)) that represent simultaneous active perception and volitional effort. To achieve this in a resource-poor setting, e.g., in low- and middle-income countries, we present a low-cost human-machine-interface (HMI) by leveraging recent advances in off-the-shelf video game sensor technology. In this paper, we discuss the open-source software interface that integrates low-cost off-the-shelf sensors for visual-auditory biofeedback with non-invasive electrotherapy to assist postural control during balance rehabilitation. We demonstrate the proof-of-concept on healthy volunteers.
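
    A highly simplified, hedged sketch of the biosignal-driven triggering idea described above: an EMG envelope gates a stimulation command once volitional effort crosses a calibrated threshold. Device I/O, safety interlocks and the actual sensor/stimulator interfaces used in the paper are deliberately left out:

```python
# Hedged sketch: EMG-RMS-gated trigger for biofeedback-driven stimulation.
import numpy as np

def rms_envelope(emg, win=200):
    """Moving RMS of an EMG channel (window length in samples)."""
    squared = emg.astype(float) ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

def stimulation_gate(emg, threshold):
    """Boolean trigger per sample: True when the EMG envelope (volitional
    effort) exceeds the calibrated threshold."""
    return rms_envelope(emg) > threshold
```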

  13. A Human-machine-interface Integrating Low-cost Sensors with a Neuromuscular Electrical Stimulation System for Post-stroke Balance Rehabilitation

    PubMed Central

    Kumar, Deepesh; Das, Abhijit; Lahiri, Uttama; Dutta, Anirban

    2016-01-01

    A stroke is caused when an artery carrying blood from the heart to an area of the brain bursts or a clot obstructs the blood flow to the brain, thereby preventing delivery of oxygen and nutrients. About half of stroke survivors are left with some degree of disability. Innovative methodologies for restorative neurorehabilitation are urgently required to reduce long-term disability. The ability of the nervous system to reorganize its structure, function and connections in response to intrinsic or extrinsic stimuli is called neuroplasticity. Neuroplasticity is involved in post-stroke functional disturbances, but also in rehabilitation. Beneficial neuroplastic changes may be facilitated with non-invasive electrotherapy, such as neuromuscular electrical stimulation (NMES) and sensory electrical stimulation (SES). NMES involves coordinated electrical stimulation of motor nerves and muscles to activate them with continuous short pulses of electrical current, while SES involves stimulation of sensory nerves with electrical current, resulting in sensations that vary from barely perceivable to highly unpleasant. Here, active cortical participation in rehabilitation procedures may be facilitated by driving the non-invasive electrotherapy with biosignals (electromyogram (EMG), electroencephalogram (EEG), electrooculogram (EOG)) that represent simultaneous active perception and volitional effort. To achieve this in a resource-poor setting, e.g., in low- and middle-income countries, we present a low-cost human-machine-interface (HMI) by leveraging recent advances in off-the-shelf video game sensor technology. In this paper, we discuss the open-source software interface that integrates low-cost off-the-shelf sensors for visual-auditory biofeedback with non-invasive electrotherapy to assist postural control during balance rehabilitation. We demonstrate the proof-of-concept on healthy volunteers. PMID:27166666

  14. Remapping residual coordination for controlling assistive devices and recovering motor functions.

    PubMed

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias B; Mussa-Ivaldi, Ferdinando A; Casadio, Maura

    2015-12-01

    The concept of human motor redundancy has attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in human-machine interface (HMI) development. The HMI can act at different levels, decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs bypass completely the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user's residual upper-body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while they simultaneously control the cursor and carry out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between the left and right sides of the body, with an increase in mobility and strength across all the degrees of freedom involved in the control of the interface in the participants. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. Copyright © 2015 Elsevier Ltd. All rights reserved.
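
    The core remapping step described above (PCA from high-dimensional body signals to two cursor coordinates) can be sketched as follows; the calibration data, gain and any subsequent map modifications are illustrative assumptions:

```python
# Hedged sketch of a PCA-based body-to-cursor map.
import numpy as np

def fit_body_map(calibration, n_control=2):
    """calibration: (n_samples, n_sensors) free-exploration data.
    Returns the mean and an (n_sensors, n_control) projection from PCA/SVD."""
    mu = calibration.mean(axis=0)
    _, _, vt = np.linalg.svd(calibration - mu, full_matrices=False)
    return mu, vt[:n_control].T

def body_to_cursor(sample, mu, projection, gain=1.0):
    """Map one body-signal sample to 2-D cursor coordinates."""
    return gain * (sample - mu) @ projection
```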

  15. Robots with a gentle touch: advances in assistive robotics and prosthetics.

    PubMed

    Harwin, W S

    1999-01-01

    As healthcare costs rise and an aging population places increased demand on services, new techniques must be introduced to promote an individual's independence and provide these services. Robots can now be designed so they can alter their dynamic properties, changing from stiff to flaccid, or from giving no resistance to movement to damping any large and sudden movements. This has some strong implications for health care, in particular for rehabilitation, where a robot must work in conjunction with an individual and might guide or assist a person's arm movements, or might be commanded to perform some set of autonomous actions. This paper presents the state of the art of rehabilitation robots, with examples from prosthetics, aids for daily living and physiotherapy. In all these situations there is the potential for the interaction to be non-passive, with a resulting potential for the human/machine/environment combination to become unstable. To understand this instability we must develop better models of the human motor system and fit these models with realistic parameters. This paper concludes with a discussion of this problem and overviews some human models that can be used to facilitate the design of human/machine interfaces.

  16. The ergonomics approach for thin film transistor-liquid crystal display manufacturing process.

    PubMed

    Lu, Chih-Wei; Yao, Chia-Chun; Kuo, Chein-Wen

    2012-01-01

    The thin film transistor-liquid crystal display (TFT-LCD) is used all over the world. Although the TFT-LCD manufacturing process is highly automated, employees are hired to do manual work in the module assembly process. These operators may have a high risk of musculoskeletal disorders because of the long work hours and the repetitive activities at ill-fitted workstations. The tools of this study were a questionnaire, a checklist and an evaluation of the workplace design. The results show that the participants reported high rates of musculoskeletal disorder symptoms in the shoulders (59.8%), neck (49.5%), wrists (39.5%), and upper back (30.6%). To reduce the ergonomic risk factors, revising the height of the work benches and chairs and redesigning the truck to decrease the chance of unsuitable postures were recommended, along with reducing other ergonomic hazards and establishing a good human-machine interface and appropriate job design.

  17. The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.

    PubMed

    Nowak, Markus; Castellini, Claudio

    2016-01-01

    Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly time-consuming and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks without the need of explicitly training multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
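
    The augmentation idea can be sketched as follows, assuming a simple feature/label layout (sEMG feature vectors with one-hot DOF labels); this illustrates the linear-combination principle named above, not the authors' implementation:

```python
# Hedged sketch of LET-style augmentation: synthesize multi-DOF examples as
# linear combinations of recorded single-DOF sEMG activations.
import numpy as np

def let_augment(X_single, y_single, dof_pairs):
    """X_single: (n, d) sEMG features for single-DOF activations;
    y_single: (n, n_dof) one-hot DOF intents; dof_pairs: list of (i, j)."""
    X_aug, y_aug = [X_single], [y_single]
    for i, j in dof_pairs:
        xi = X_single[y_single[:, i] == 1]
        xj = X_single[y_single[:, j] == 1]
        m = min(len(xi), len(xj))
        X_aug.append(xi[:m] + xj[:m])            # linear combination of signals
        y = np.zeros((m, y_single.shape[1]))
        y[:, [i, j]] = 1                         # simultaneous two-DOF label
        y_aug.append(y)
    return np.vstack(X_aug), np.vstack(y_aug)
```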

  18. Simulation of the human-telerobot interface on the Space Station

    NASA Technical Reports Server (NTRS)

    Stuart, Mark A.; Smith, Randy L.

    1993-01-01

    Many issues remain unresolved concerning the components of the human-telerobot interface presented in this work. It is critical that these components be optimally designed and arranged to ensure not only that the overall system's goals are met, but also that the intended end-user has been optimally accommodated. With sufficient testing and evaluation throughout the development cycle, the selection of the components to use in the final telerobotic system can promote efficient, error-free performance. It is recommended that whole-system simulation with full-scale mockups be used to help design the human-telerobot interface. It is contended that the use of simulation can facilitate this design and evaluation process.

  19. Flexible Parsing.

    DTIC Science & Technology

    1986-06-30

    Minton, S. N., Hayes, P. J., and Fain, J. E., "Controlling Search in Flexible Parsing," Proc. Ninth Int. Jt. Conf. on Artificial...; "...interaction through the COUSIN command interface," International Journal of Man-Machine Studies, Vol. 19, No. 3, September 1983, pp. 285-305; "...in a gracefully interacting user interface"; "Dynamic strategy selection in flexible parsing"; "Parsing spoken language: a semantic case frame..."

  20. Problems in modeling man machine control behavior in biodynamic environments

    NASA Technical Reports Server (NTRS)

    Jex, H. R.

    1972-01-01

    Reviewed are some current problems in modeling man-machine control behavior in a biodynamic environment. The review is given in two parts: (1) a review of the models which are appropriate for manual control behavior and the added elements necessary to deal with biodynamic interfaces; and (2) a review of some biodynamic interface pilot/vehicle problems which have occurred, been solved, or need to be solved.

  1. Recognizing Disguised Faces: Human and Machine Evaluation

    PubMed Central

    Dhamecha, Tejas Indulal; Singh, Richa; Vatsa, Mayank; Kumar, Ajay

    2014-01-01

    Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance for recognizing/verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify faces presented under disguise variations. We use automatically localized feature descriptors that can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database, which contains images pertaining to 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm can outperform a popular commercial system, and it is also evaluated against humans in matching disguised face images. PMID:25029188

  2. Combat Automation for Airborne Weapon Systems: Man/Machine Interface Trends and Technologies (L’Automatisation du Combat Aerien: Tendances et Technologies pour l’Interface Homme/Machine)

    DTIC Science & Technology

    1993-04-01


  3. Human factors issues for interstellar spacecraft

    NASA Technical Reports Server (NTRS)

    Cohen, Marc M.; Brody, Adam R.

    1991-01-01

    Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during interstellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.

  4. Space Applications of Automation, Robotics and Machine Intelligence Systems (ARAMIS). Volume 4: Supplement, Appendix 4.3: Candidate ARAMIS Capabilities

    NASA Technical Reports Server (NTRS)

    Miller, R. H.; Minsky, M. L.; Smith, D. B. S.

    1982-01-01

    This study examines potential applications of automation, robotics, and machine intelligence systems (ARAMIS) to space activities, and to their related ground support functions, in the years 1985-2000, so that NASA may make informed decisions on which aspects of ARAMIS to develop. The study first identifies the specific tasks which will be required by future space projects. It then defines ARAMIS options which are candidates for those space project tasks, and evaluates the relative merits of these options. Finally, the study identifies promising applications of ARAMIS and recommends specific areas for further research. The ARAMIS options defined and researched by the study group span the range from fully human to fully machine, including a number of intermediate options (e.g., humans assisted by computers, and various levels of teleoperation). By including this spectrum, the study searches for the optimum mix of humans and machines for space project tasks.

  5. A structurally decoupled mechanism for measuring wrist torque in three degrees of freedom

    NASA Astrophysics Data System (ADS)

    Pan, Lizhi; Yang, Zhen; Zhang, Dingguo

    2015-10-01

    The wrist joint is a critical part of the human body for movement. Measuring the torque of the wrist with three degrees of freedom (DOFs) is important in some fields, including rehabilitation, biomechanics, ergonomics, and human-machine interfacing. However, the particular structure of the wrist joint makes it difficult to measure the torque in all three directions simultaneously. This work develops a structurally decoupled instrument for measuring and improving the measurement accuracy of 3-DOF wrist torque during isometric contraction. Three single-axis torque sensors were embedded in a customized mechanical structure. The dimensions and components of the instrument were designed based on requirement of manufacturability. A prototype of the instrument was machined, assembled, integrated, and tested. The results show that the structurally decoupled mechanism is feasible for acquiring wrist torque data in three directions either independently or simultaneously. As a case study, we use the device to measure wrist torques concurrently with electromyography signal acquisition in preparation for simultaneous and proportional myoelectric control of prostheses.

  6. A structurally decoupled mechanism for measuring wrist torque in three degrees of freedom.

    PubMed

    Pan, Lizhi; Yang, Zhen; Zhang, Dingguo

    2015-10-01

    The wrist joint is a critical part of the human body for movement. Measuring the torque of the wrist with three degrees of freedom (DOFs) is important in some fields, including rehabilitation, biomechanics, ergonomics, and human-machine interfacing. However, the particular structure of the wrist joint makes it difficult to measure the torque in all three directions simultaneously. This work develops a structurally decoupled instrument for measuring and improving the measurement accuracy of 3-DOF wrist torque during isometric contraction. Three single-axis torque sensors were embedded in a customized mechanical structure. The dimensions and components of the instrument were designed based on requirement of manufacturability. A prototype of the instrument was machined, assembled, integrated, and tested. The results show that the structurally decoupled mechanism is feasible for acquiring wrist torque data in three directions either independently or simultaneously. As a case study, we use the device to measure wrist torques concurrently with electromyography signal acquisition in preparation for simultaneous and proportional myoelectric control of prostheses.

  7. Chip breaking system for automated machine tool

    DOEpatents

    Arehart, Theodore A.; Carey, Donald O.

    1987-01-01

    The invention is a rotary selectively directional valve assembly for use in an automated turret lathe for directing a stream of high pressure liquid machining coolant to the interface of a machine tool and workpiece for breaking up ribbon-shaped chips during their formation, so as to inhibit scratching or other marring of the machined surfaces by these ribbon-shaped chips. The valve assembly is provided by a manifold arrangement having a plurality of circumferentially spaced apart ports, each coupled to a machine tool. The manifold is rotatable with the turret when the turret is positioned for alignment of a machine tool in a machining relationship with the workpiece. The manifold is connected to a non-rotational header having a single passageway therethrough which conveys the high pressure coolant to only the port in the manifold which is in registry with the tool disposed in a working relationship with the workpiece. To position the machine tools, the turret is rotated and one of the tools is placed in a material-removing relationship with the workpiece. The passageway in the header and one of the ports in the manifold arrangement are then automatically aligned to supply the machining coolant to the machine tool-workpiece interface for breaking up the chips as well as cooling the tool and workpiece during the machining operation.

  8. Synaptic organic transistors with a vacuum-deposited charge-trapping nanosheet

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Hyun; Sung, Sujin; Yoon, Myung-Han

    2016-09-01

    Organic neuromorphic devices hold great promise for unconventional signal processing and efficient human-machine interfaces. Herein, we propose novel synaptic organic transistors devised to overcome the traditional trade-off between channel conductance and memory performance. A vacuum-processed, nanoscale metallic interlayer provides an ultra-flat surface for a high-mobility molecular film as well as a desirable degree of charge trapping, allowing for low-temperature fabrication of uniform device arrays on plastic. The device architecture is implemented by widely available electronic materials in combination with conventional deposition methods. Therefore, our results are expected to generate broader interests in incorporation of organic electronics into large-area neuromorphic systems, with potential in gate-addressable complex logic circuits and transparent multifunctional interfaces receiving direct optical and cellular stimulation.

  9. Use of parallel computing for analyzing big data in EEG studies of ambiguous perception

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Grubov, Vadim V.; Kirsanov, Daniil V.

    2018-02-01

    The problem of interaction between human and machine systems through neuro-interfaces (or brain-computer interfaces) is an urgent task which requires the analysis of large amounts of neurophysiological EEG data. In the present paper we consider methods of parallel computing as one of the most powerful tools for processing experimental data in real time with respect to the multichannel structure of EEG. In this context we demonstrate the application of parallel computing to the estimation of the spectral properties of multichannel EEG signals associated with visual perception. Using the CUDA C library, we run a wavelet-based algorithm on GPUs and show the possibility of detecting specific patterns in a multichannel set of EEG data in real time.
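
    The per-channel wavelet power computation is embarrassingly parallel, which is what makes the GPU implementation attractive. The sketch below shows a CPU-side analogue of the same idea (channels mapped onto worker processes, a hand-rolled Morlet convolution); the paper's CUDA C implementation and exact wavelet parameters are not reproduced here:

```python
# Hedged sketch: parallel per-channel Morlet wavelet power for multichannel EEG.
import numpy as np
from multiprocessing import Pool

def morlet_power(args):
    """Wavelet power of one EEG channel at one frequency via Morlet convolution."""
    signal, fs, freq = args
    t = np.arange(-1, 1, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * 0.1**2))
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

def multichannel_power(eeg, fs=250, freq=10.0, workers=4):
    """eeg: (n_channels, n_samples); returns a per-channel power time series."""
    with Pool(workers) as pool:
        return np.array(pool.map(morlet_power, [(ch, fs, freq) for ch in eeg]))
```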

  10. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  11. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool, (2) a low fidelity simulator development tool, (3) a dynamic, interactive interface between the HCI and the simulator, and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  12. Modifying the Human-Machine Interface Based on Quantitative Measurements of the Level of Awareness

    NASA Technical Reports Server (NTRS)

    Freund, Louis E.; Knapp, Benjamin

    1999-01-01

    This project got underway without funding approved during the summer of 1998. The initial project steps were to identify previously published work in the fields of error classification systems, physiological measurements of awareness, and related topics. This agenda was modified at the request of NASA Ames in August, 1998 to include supporting the new Cargo Air Association (CAA) evaluation of the Human Factors related to the ADS-B technology. Additional funding was promised to fully support both efforts. Work on library research ended in the late Fall, 1998 when the SJSU project directors were informed that NASA would not be adding to the initial funding of the research project as had been initially committed. However, NASA did provide additional funding for the CAA project activity. NASA elected to leave the research grant in place to provide a pathway for the CAA project funding to SJSU (San Jose State University) to support Dr. Freund's work on the CAA tasks. Dr. Knapp essentially terminated his involvement with the project at this time.

  13. Human machine interface to manually drive rhombic like vehicles such as transport casks in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, Pedro; Vale, Alberto; Ventura, Rodrigo

    2015-07-01

    The Cask and Plug Remote Handling System (CPRHS) and the respective Cask Transfer System (CTS) are designed to transport activated components between the reactor and the hot cell buildings of ITER during maintenance operations. In nominal operation, the CPRHS/CTS shall operate autonomously under human supervision. However, in some unexpected situations, the automatic mode must be overridden and the vehicle must be remotely guided by a human operator due to the harsh conditions of the environment. The CPRHS/CTS is a rhombic-like vehicle with two independent steerable and drivable wheels along its longitudinal axis, giving it omni-directional capabilities. During manual guidance, the human operator has to deal with four degrees of freedom, namely the orientations and speeds of two wheels. This work proposes a Human Machine Interface (HMI) to manage the degrees of freedom and to remotely guide the CPRHS/CTS in ITER, taking full advantage of its rhombic-like capabilities. Previous work was done to drive each wheel independently, i.e., to control the orientation and speed of each wheel independently. The results have shown that that solution is inefficient: the attention of the human operator becomes focused on a single wheel. In addition, it cannot assure that the commands respect the physical constraints of the vehicle, resulting in slippage or even in clashes. This work proposes a solution that consists in controlling the vehicle through the position of its center of mass and its heading in the world frame. The solution is implemented using a rotational disk to control the vehicle heading and a common analogue joystick to control the velocity vector of the center of mass of the vehicle. The number of degrees of freedom reduces to three, i.e., two angles (the vehicle heading and the orientation of the velocity vector) and a scalar (the magnitude of the velocity vector). This is possible using a kinematic model based on the vehicle Instantaneous Center of Rotation (ICR): a geometric approach where, at each time instant, the vehicle describes a circumference (either with a finite or infinite radius). The inverse of the kinematic model transforms the three input parameters of the center of mass into the four parameters for the wheels, preserving the omni-directional capabilities. The solution is implemented and tested using an HMI with a control disk and an analogue joystick with two axes. The control disk was specially designed for this solution and implemented using a programmable micro-controller. In the first set of experiments, the HMI communicates with a computer running a simulator of the CPRHS/CTS, with the vehicle kinematics and dynamics, moving in a map of the ITER buildings. In the second set of experiments, the HMI communicates with a scaled prototype of the CPRHS running in a mock-up scenario to obtain more realistic results. Several types of tests were performed to evaluate the usability of the HMI. Different human operators with neither knowledge of nor experience with this interface were invited to test it. The operators had to drive the vehicle from an initial place to a final destination under the following conditions: with a pre-computed path to help guidance, without any path, with information about the closest obstacles, and without any help. The performance was evaluated using the time duration of the operation, the energy required to perform the described path, the risk of collision and, in the case of a pre-computed path, the comparison between paths. In addition, each operator tested the HMI several times to evaluate performance along consecutive trials. (authors)
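
    The mapping from three operator inputs to four wheel parameters can be sketched with generic rigid-body kinematics: the velocity of each wheel contact point is the centre-of-mass velocity plus the rotational contribution, from which the wheel's steering angle and speed follow. This is a hedged illustration of the principle only; the paper's ICR-based formulation and the vehicle's real geometry may differ in detail:

```python
# Hedged sketch: inverse kinematics for a two-wheel rhombic layout.
import numpy as np

def rhombic_inverse_kinematics(v_mag, v_dir, yaw_rate, wheel_offset=1.0):
    """v_mag (m/s) and v_dir (rad) give the centre-of-mass velocity in the body
    frame; yaw_rate (rad/s) is the commanded rotation; wheel_offset is the
    distance of each wheel from the centre along the longitudinal axis (m)."""
    v_cm = v_mag * np.array([np.cos(v_dir), np.sin(v_dir)])
    wheels = {}
    for name, x in (("front", +wheel_offset), ("rear", -wheel_offset)):
        v_wheel = v_cm + yaw_rate * np.array([0.0, x])   # v_cm + omega x r, with r = (x, 0)
        wheels[name] = {"angle": float(np.arctan2(v_wheel[1], v_wheel[0])),
                        "speed": float(np.linalg.norm(v_wheel))}
    return wheels

print(rhombic_inverse_kinematics(v_mag=0.5, v_dir=0.0, yaw_rate=0.2))
```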

  14. Operation of micro and molecular machines: a new concept with its origins in interface science.

    PubMed

    Ariga, Katsuhiko; Ishihara, Shinsuke; Izawa, Hironori; Xia, Hong; Hill, Jonathan P

    2011-03-21

    A landmark accomplishment of nanotechnology would be successful fabrication of ultrasmall machines that can work like tweezers, motors, or even computing devices. Now we must consider how operation of micro- and molecular machines might be implemented for a wide range of applications. If these machines function only under limited conditions and/or require specialized apparatus then they are useless for practical applications. Therefore, it is important to carefully consider the access of functionality of the molecular or nanoscale systems by conventional stimuli at the macroscopic level. In this perspective, we will outline the position of micro- and molecular machines in current science and technology. Most of these machines are operated by light irradiation, application of electrical or magnetic fields, chemical reactions, and thermal fluctuations, which cannot always be applied in remote machine operation. We also propose strategies for molecular machine operation using the most conventional of stimuli, that of macroscopic mechanical force, achieved through mechanical operation of molecular machines located at an air-water interface. The crucial roles of the characteristics of an interfacial environment, i.e. connection between macroscopic dimension and nanoscopic function, and contact of media with different dielectric natures, are also described.

  15. Intention Concepts and Brain-Machine Interfacing

    PubMed Central

    Thinnes-Elker, Franziska; Iljina, Olga; Apostolides, John Kyle; Kraemer, Felicitas; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio

    2012-01-01

    Intentions, including their temporal properties and semantic content, are receiving increased attention, and neuroscientific studies in humans vary with respect to the topography of intention-related neural responses. This may reflect the fact that the kind of intentions investigated in one study may not be exactly the same kind investigated in the other. Fine-grained intention taxonomies developed in the philosophy of mind may be useful to identify the neural correlates of well-defined types of intentions, as well as to disentangle them from other related mental states, such as mere urges to perform an action. Intention-related neural signals may be exploited by brain-machine interfaces (BMIs) that are currently being developed to restore speech and motor control in paralyzed patients. Such BMI devices record the brain activity of the agent, interpret (“decode”) the agent’s intended action, and send the corresponding execution command to an artificial effector system, e.g., a computer cursor or a robotic arm. In the present paper, we evaluate the potential of intention concepts from philosophy of mind to improve the performance and safety of BMIs based on higher-order, intention-related control signals. To this end, we address the distinction between future-, present-directed, and motor intentions, as well as the organization of intentions in time, specifically to what extent it is sequential or hierarchical. This has consequences as to whether these different types of intentions can be expected to occur simultaneously or not. We further illustrate how it may be useful or even necessary to distinguish types of intentions exposited in philosophy, including yes- vs. no-intentions and oblique vs. direct intentions, to accurately decode the agent’s intentions from neural signals in practical BMI applications. PMID:23162504

  16. [A cyborg is only human].

    PubMed

    Schermer, Maartje H N

    2013-01-01

    New biomedical technologies make it possible to replace parts of the human body or to substitute its functions. Examples include artificial joints, eye lenses and arterial stents. Newer technologies use electronics and software, for example in brain-computer interfaces such as retinal implants and the exoskeleton MindWalker. Gradually we are creating cyborgs: hybrids of man and machine. This raises the question: are cyborgs still humans? It is argued that they are. First, because employing technology is a typically human characteristic. Second, because in western thought the human mind, and not the body, is considered to be the seat of personhood. However, it has been argued by phenomenological philosophers that the body is more than just an object but is also a subject, important for human identity. From this perspective, we can appreciate that a bionic body does not make one less human, but it does influence the experience of being human.

  17. Toward more versatile and intuitive cortical brain-machine interfaces.

    PubMed

    Andersen, Richard A; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson

    2014-09-22

    Brain-machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain-machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. 40 CFR 63.464 - Alternative standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (a)(2) of this section. (1) If the cleaning machine has a solvent/air interface, as defined in § 63... cleaning machines 153 New in-line solvent cleaning machines 99 (2) If the cleaning machine is a batch vapor... requirements specified in paragraphs (a)(2)(i) and (a)(2)(ii) of this section. (i) Maintain a log of solvent...

  19. 40 CFR 63.464 - Alternative standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (a)(2) of this section. (1) If the cleaning machine has a solvent/air interface, as defined in § 63... cleaning machines 153 New in-line solvent cleaning machines 99 (2) If the cleaning machine is a batch vapor... requirements specified in paragraphs (a)(2)(i) and (a)(2)(ii) of this section. (i) Maintain a log of solvent...

  20. Artificial intelligence applied to process signal analysis

    NASA Technical Reports Server (NTRS)

    Corsberg, Dan

    1988-01-01

    Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. Using artificial intelligence techniques and a knowledge base approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.

  1. A chronic generalized bi-directional brain-machine interface.

    PubMed

    Rouse, A G; Stanslaski, S R; Cong, P; Jensen, R M; Afshar, P; Ullestad, D; Gupta, R; Molnar, G F; Moran, D W; Denison, T J

    2011-06-01

    A bi-directional neural interface (NI) system was designed and prototyped by incorporating a novel neural recording and processing subsystem into a commercial neural stimulator architecture. The NI system prototype leverages the system infrastructure from an existing neurostimulator to ensure reliable operation in a chronic implantation environment. In addition to providing predicate therapy capabilities, the device adds key elements to facilitate chronic research, such as four channels of electrocorticogram/local field potential amplification and spectral analysis, a three-axis accelerometer, algorithm processing, event-based data logging, and wireless telemetry for data uploads and algorithm/configuration updates. The custom-integrated micropower sensor and interface circuits facilitate extended operation in a power-limited device. The prototype underwent significant verification testing to ensure reliability, and meets the requirements for a class CF instrument per IEC-60601 protocols. The ability of the device system to process and aid in classifying brain states was preclinically validated using an in vivo non-human primate model for brain control of a computer cursor (i.e. brain-machine interface or BMI). The primate BMI model was chosen for its ability to quantitatively measure signal decoding performance from brain activity that is similar in both amplitude and spectral content to other biomarkers used to detect disease states (e.g. Parkinson's disease). A key goal of this research prototype is to help broaden the clinical scope and acceptance of NI techniques, particularly real-time brain state detection. These techniques have the potential to be generalized beyond motor prosthesis, and are being explored for unmet needs in other neurological conditions such as movement disorders, stroke and epilepsy.

  2. Neurofeedback Training for BCI Control

    NASA Astrophysics Data System (ADS)

    Neuper, Christa; Pfurtscheller, Gert

    Brain-computer interface (BCI) systems detect changes in brain signals that reflect human intention, then translate these signals to control monitors or external devices (for a comprehensive review, see [1]). BCIs typically measure electrical signals resulting from neural firing (i.e., neuronal action potentials, the electrocorticogram (ECoG), or the electroencephalogram (EEG)). Sophisticated pattern recognition and classification algorithms convert neural activity into the required control signals. BCI research has focused heavily on developing powerful signal processing and machine learning techniques to accurately classify neural activity [2-4].

  3. Application of the SCADA system in wastewater treatment plants.

    PubMed

    Dieu, B

    2001-01-01

    The implementation of the SCADA system has a positive impact on the operations, maintenance, process improvement and savings for the City of Houston's Wastewater Operations branch. This paper will discuss the system's evolution, the external/internal architecture, and the human-machine-interface graphical design. Finally, it will demonstrate the system's successes in monitoring the City's sewage and sludge collection/distribution systems, wet-weather facilities and wastewater treatment plants, complying with the USEPA requirements on discharge, and effectively reducing operations and maintenance costs.

  4. Active tactile exploration using a brain-machine-brain interface.

    PubMed

    O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L

    2011-10-05

    Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.

  5. [Training cortical signals by means of a BMI-EEG system, its evolution and intervention. A case report].

    PubMed

    Monge-Pereira, E; Casatorres Perez-Higueras, I; Fernandez-Gonzalez, P; Ibanez-Pereda, J; Serrano, J I; Molina-Rueda, F

    2017-04-16

    In recent years, new technologies such as brain-machine interfaces (BMI) have been incorporated into the rehabilitation process of subjects with stroke. These systems are able to detect motion intention by analyzing cortical signals using techniques such as electroencephalography (EEG). This information can guide different interfaces such as robotic devices, electrical stimulation or virtual reality. A 40-year-old man with stroke, two months after the injury, participated in this study. We used a BMI based on EEG. The subject's motion intention was analyzed by calculating the event-related desynchronization. Upper limb motor function was evaluated with the Fugl-Meyer Assessment and the participant's satisfaction was evaluated using the QUEST 2.0. The intervention, using a physical therapist as an interface, was carried out without difficulty. The BMI system detects cortical changes in a subacute stroke subject. These changes are coherent with the evolution observed using the Fugl-Meyer Assessment.

  6. Applications of airborne ultrasound in human-computer interaction.

    PubMed

    Dahl, Tobias; Ealo, Joao L; Bang, Hans J; Holm, Sverre; Khuri-Yakub, Pierre

    2014-09-01

    Airborne ultrasound is a rapidly developing subfield within human-computer interaction (HCI). Touchless ultrasonic interfaces and pen tracking systems are part of recent trends in HCI and are gaining industry momentum. This paper aims to provide the background and overview necessary to understand the capabilities of ultrasound and its potential future in human-computer interaction. The latest developments on the ultrasound transducer side are presented, focusing on capacitive micro-machined ultrasonic transducers, or CMUTs. Their introduction is an important step toward providing real, low-cost multi-sensor array and beam-forming options. We also provide a unified mathematical framework for understanding and analyzing algorithms used for ultrasound detection and tracking for some of the most relevant applications. Copyright © 2014. Published by Elsevier B.V.

  7. Physics-based and human-derived information fusion for analysts

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael

    2017-05-01

    Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.

  8. Aircraft-vehicle system interaction. An evaluation of NASA's program in human factors research

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Research in the areas of man-machine interaction and human factors engineering is assessed in relation to improved efficiency and aviation safety. The appropriateness, relevance, adequacy, and timeliness of the research are evaluated, and recommendations are provided regarding the objectives, approach and content.

  9. Transforming Biology Assessment with Machine Learning: Automated Scoring of Written Evolutionary Explanations

    ERIC Educational Resources Information Center

    Nehm, Ross H.; Ha, Minsu; Mayfield, Elijah

    2012-01-01

    This study explored the use of machine learning to automatically evaluate the accuracy of students' written explanations of evolutionary change. Performance of the Summarization Integrated Development Environment (SIDE) program was compared to human expert scoring using a corpus of 2,260 evolutionary explanations written by 565 undergraduate…
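
    The general approach (supervised text classification of written explanations against expert labels) can be sketched as follows; this is not the SIDE system itself, and the toy training data and model choice are illustrative assumptions:

```python
# Hedged sketch: scoring written explanations with a supervised text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["mutations arise randomly and selection increases their frequency",
         "the giraffe stretched its neck so its offspring had longer necks"]
labels = ["accurate", "misconception"]          # expert-scored training labels

scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scorer.fit(texts, labels)
print(scorer.predict(["variation plus differential survival changes the population"]))
```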

  10. Vibrotactile display for mobile applications based on dielectric elastomer stack actuators

    NASA Astrophysics Data System (ADS)

    Matysek, Marc; Lotz, Peter; Flittner, Klaus; Schlaak, Helmut F.

    2010-04-01

    Dielectric elastomer stack actuators (DESA) offer the possibility to build actuator arrays at very high density. The driving voltage is determined by the film thickness, which ranges from 80 μm down to 5 μm, at a driving field strength of 30 V/μm. In this paper we present the development of a vibrotactile display based on multilayer technology. The display is used to present several operating conditions of a machine in the form of haptic information to a human finger. As an example, the design of an mp3-player interface is introduced. To build an intuitive and user-friendly interface, several aspects of human haptic perception have to be considered. Using the results of preliminary user tests, the interface is designed and an appropriate actuator layout is derived. Controlling these actuators is important because there are many possibilities for presenting different information, e.g. by varying the driving parameters. A demonstrator was built to verify the concept: a high recognition rate of more than 90% validates the concept. A characterization of mechanical and electrical parameters proves the suitability of dielectric elastomer stack actuators for use in mobile applications.
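
    A rough worked example of the thickness-voltage relationship stated above, assuming the quoted 30 V/μm field applies across a single layer of the stated thickness:

```latex
U = E\,d:\qquad
U_{80\,\mu\mathrm{m}} = 30\,\tfrac{\mathrm{V}}{\mu\mathrm{m}} \times 80\,\mu\mathrm{m} = 2400\,\mathrm{V},
\qquad
U_{5\,\mu\mathrm{m}} = 30\,\tfrac{\mathrm{V}}{\mu\mathrm{m}} \times 5\,\mu\mathrm{m} = 150\,\mathrm{V}
```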

  11. Mold Heating and Cooling Pump Package Operator Interface Controls Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josh A. Salmond

    2009-08-07

    The modernization of the Mold Heating and Cooling Pump Package Operator Interface (MHC PP OI) consisted of upgrading the antiquated single board computer with a proprietary operating system to off-the-shelf hardware and off-the-shelf software with customizable software options. The pump package is the machine interface between a central heating and cooling system that pumps heat transfer fluid through an injection or compression mold base on a local plastic molding machine. The operator interface provides the intelligent means of controlling this pumping process. Strict temperature control of a mold allows the production of high quality parts with tight tolerances and low residual stresses. The products fabricated are used on multiple programs.

  12. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics

    PubMed Central

    Herrero, Héctor; Outón, Jose Luis; Puerto, Mildred; Sallé, Damien; López de Ipiña, Karmele

    2017-01-01

    This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more concretely dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques. PMID:28561750

  13. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics.

    PubMed

    Herrero, Héctor; Outón, Jose Luis; Puerto, Mildred; Sallé, Damien; López de Ipiña, Karmele

    2017-05-31

    This paper presents a state machine-based architecture that enhances the flexibility and reusability of industrial robots, specifically dual-arm multisensor robots. In addition to providing full control over execution, the proposed architecture eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. The proposed approach is applied in a real use case to demonstrate its capabilities and feasibility in industrial environments, and a comparative analysis evaluates it against traditional robot programming techniques.

  14. A Wireless 32-Channel Implantable Bidirectional Brain Machine Interface

    PubMed Central

    Su, Yi; Routhu, Sudhamayee; Moon, Kee S.; Lee, Sung Q.; Youm, WooSub; Ozturk, Yusuf

    2016-01-01

    All neural information systems (NIS) rely on sensing neural activity to supply commands and control signals for computers, machines and a variety of prosthetic devices. Invasive systems achieve a high signal-to-noise ratio (SNR) by eliminating the volume conduction problems caused by tissue and bone. An implantable brain machine interface (BMI) using intracortical electrodes provides excellent detection of a broad range of frequency oscillatory activities through the placement of a sensor in direct contact with the cortex. This paper introduces a compact implantable wireless 32-channel bidirectional brain machine interface (BBMI) to be used with freely-moving primates. The system is designed to monitor brain sensorimotor rhythms and deliver current stimuli with configurable duration, frequency and amplitude to the brain in real time, based on the recorded brain activity. The battery is charged via a novel ultrasonic wireless power delivery module developed for efficient delivery of power into a deeply-implanted system. The system was successfully tested through bench tests and in vivo tests on a behaving primate, recording local field potential (LFP) oscillations and stimulating the target area at the same time. PMID:27669264
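
    As a sketch of the kind of configurable stimulation command such a bidirectional interface might accept (field names and ranges are our assumptions; the abstract does not describe the actual packet format):

    ```python
    # Sketch of a configurable stimulation command (field names are assumed;
    # not the device's actual interface).
    from dataclasses import dataclass

    @dataclass
    class StimCommand:
        channel: int          # target electrode, 0-31 on a 32-channel device
        amplitude_ua: float   # current amplitude in microamps
        frequency_hz: float   # pulse-train frequency
        duration_ms: float    # total train duration

        def n_pulses(self) -> int:
            """Pulses delivered over the configured duration."""
            return int(self.frequency_hz * self.duration_ms / 1000.0)

    cmd = StimCommand(channel=7, amplitude_ua=50.0, frequency_hz=100.0, duration_ms=200.0)
    print(cmd.n_pulses())  # 20 pulses
    ```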

  15. Quantum neural network based machine translator for Hindi to English.

    PubMed

    Narayan, Ravi; Singh, V P; Chakraverty, S

    2014-01-01

    This paper presents a machine-learning-based machine translation system for Hindi to English that learns from a semantically correct corpus. A quantum-neural pattern recognizer is used to recognize and learn the patterns of the corpus, using part-of-speech information for each word, much as a human would. The system performs translation using the knowledge gained during learning from paired Devanagari-Hindi and English sentences. To analyze the effectiveness of the proposed approach, 2,600 sentences were evaluated during simulation. The system achieves a BLEU score of 0.7502, a NIST score of 6.5773, a ROUGE-L score of 0.9233, and a METEOR score of 0.5456, significantly higher than Google Translate and Bing Translator for Hindi-to-English machine translation.
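
    The reported figures are standard machine-translation metrics; as an illustration of how a corpus-level BLEU score can be computed (toy sentences of our own, not the paper's Hindi-English corpus or exact scoring setup), NLTK can be used as follows:

    ```python
    # Corpus-level BLEU with NLTK (toy example; not the paper's corpus or setup).
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    references = [  # one list of reference translations per hypothesis
        [["the", "boy", "is", "playing", "in", "the", "garden"]],
    ]
    hypotheses = [
        ["the", "boy", "plays", "in", "the", "garden"],
    ]

    score = corpus_bleu(references, hypotheses,
                        smoothing_function=SmoothingFunction().method1)
    print(f"BLEU = {score:.4f}")
    ```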

  16. Man-Machine Integration Design and Analysis System (MIDAS) v5: Augmentations, Motivations, and Directions for Aeronautics Applications

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2011-01-01

    As automation and advanced technologies are introduced into transport systems ranging from the Next Generation Air Transportation System (NextGen), to advanced surface transportation systems as exemplified by Intelligent Transportation Systems, to future systems designed for space exploration, there is an increased need to predict validly how these future systems will be vulnerable to error given the demands imposed by assistive technologies. One formalized approach to studying the impact of assistive technologies on the human operator in a safe and non-obtrusive manner is the use of human performance models (HPMs). HPMs play an integral role when complex human-system designs are proposed, developed, and tested. One HPM tool, the Man-machine Integration Design and Analysis System (MIDAS), is a NASA Ames Research Center HPM software tool that has been applied to predict human-system performance in various domains since 1986. MIDAS is a dynamic, integrated HPM and simulation environment that facilitates the design, visualization, and computational evaluation of complex man-machine system concepts in simulated operational environments. The paper discusses a range of aviation-specific applications, including an approach used to model human error for NASA's Aviation Safety Program and what-if analyses to evaluate flight deck technologies for NextGen operations. It culminates by raising two challenges for the field of predictive HPMs for complex human-system designs that evaluate assistive technologies: (1) model transparency and (2) model validation.

  17. Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.

    PubMed

    Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian

    2014-01-01

    A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and, more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand. A virtual hand posture matching task was also used to compare the controllers when reproducing six target postures. Performance with the postural control scheme was significantly better (p < 0.05) than with the finite state machines during the physical assessment when comparing within-subject averages on the SHAP percent difference metric. The virtual assessment showed significantly greater completion rates (97% and 99%) for the finite state machines, but movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.
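
    As a toy sketch of how a finite-state-machine controller steps through grasps on discrete EMG triggers (trigger names and grasp set are illustrative, not the controllers evaluated in the study):

    ```python
    # Toy finite-state-machine grasp selector driven by EMG triggers
    # (illustrative only; not the study's controllers).
    TRANSITIONS = {
        # (current_grasp, trigger) -> next_grasp
        ("rest", "cocontraction"): "power",
        ("power", "cocontraction"): "precision",
        ("precision", "cocontraction"): "lateral",
        ("lateral", "cocontraction"): "rest",
    }

    def step(current: str, trigger: str) -> str:
        """Advance the grasp state; unknown triggers hold the current grasp."""
        return TRANSITIONS.get((current, trigger), current)

    state = "rest"
    for trig in ("cocontraction", "cocontraction", "flex"):
        state = step(state, trig)
        print(state)  # power, precision, precision
    ```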

  18. Software platform for managing the classification of error- related potentials of observers

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of actors who commit errors and of observers who watch error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them as observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed in C# using MySQL, the .NET Framework, Entity Framework and Emgu CV, the latter for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. The trained classifier can then be used to classify any EP curve that has been entered into the database.
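
    The described pipeline (feature selection followed by training an ANN, SVM or k-NN classifier) can be approximated outside the platform's C#/Emgu CV stack; a minimal sketch with scikit-learn, using synthetic data in place of real EP features:

    ```python
    # Minimal sketch of the feature-selection + classification step
    # (scikit-learn stands in for the platform's Emgu CV/OpenCV backend;
    # the data here are synthetic placeholders, not real EP recordings).
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 6))     # up to six features per EP recording per electrode
    y = rng.integers(0, 2, size=120)  # 0 = correct action observed, 1 = error observed

    clf = make_pipeline(SelectKBest(f_classif, k=4), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean())
    ```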

  19. Mobile Tactical HF/VHF/EW System for Ground Forces

    DTIC Science & Technology

    1989-09-01

    presentation of what I have learned. I would like to thank my advisor, Professor Robert Partelow, and co-advisor, Commander James R. Powell, for the...analyze newly developed systems to determine how the man-machine interfaces of such systems can best be designed for optimal use by the operators. B...terminals and other controls. If factors like luminance ratio, reflectance, glare illuminance are allowed for good man-machine interface then an effective

  20. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    PubMed

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firing and movement to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. These properties result from neural plasticity, motor learning, etc., and lead to degraded decoding performance when the model is fixed. To track non-stationary neuron tuning during decoding, we propose a dual-model approach based on Monte Carlo point process filtering that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than one with static tuning parameters, suggesting a promising way to design a long-term-performing decoding model for Brain Machine Interfaces.
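
    A greatly simplified sketch of the underlying idea, Monte Carlo (particle-filter) decoding of a kinematic state from spike counts under a Poisson tuning model, is given below; it is our illustration only and does not implement the paper's dual estimation of state and time-varying tuning parameters:

    ```python
    # Simplified particle-filter decoder for a 1-D kinematic state from spike
    # counts (illustration only; the paper's dual estimation of state and
    # drifting tuning parameters is not implemented here).
    import numpy as np

    rng = np.random.default_rng(1)
    N_PARTICLES, N_STEPS, N_NEURONS, DT = 500, 100, 10, 0.05

    # Assumed log-linear tuning rate_i = exp(a_i + b_i * x), held fixed here.
    a = rng.normal(-1.0, 0.2, N_NEURONS)
    b = rng.normal(1.0, 0.3, N_NEURONS)

    def rates(x):
        return np.exp(a + np.outer(x, b))  # shape: (len(x), N_NEURONS)

    # Simulate a "true" trajectory and its spike counts.
    x_true = np.cumsum(rng.normal(0, 0.1, N_STEPS))
    counts = rng.poisson(rates(x_true) * DT)

    particles = rng.normal(0, 1, N_PARTICLES)
    estimates = []
    for t in range(N_STEPS):
        particles = particles + rng.normal(0, 0.1, N_PARTICLES)  # random-walk state model
        lam = rates(particles) * DT
        logw = (counts[t] * np.log(lam) - lam).sum(axis=1)  # Poisson log-likelihood
        w = np.exp(logw - logw.max()); w /= w.sum()
        estimates.append(np.dot(w, particles))
        particles = particles[rng.choice(N_PARTICLES, N_PARTICLES, p=w)]  # resample

    print("decoding RMSE:", np.sqrt(np.mean((np.array(estimates) - x_true) ** 2)))
    ```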
