Deng, Li; Wang, Guohua; Yu, Suihuai
2016-01-01
In order to consider the psychological cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and according to the information-processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interfaces were summarized as the constraints in layout design. Next, the expressions of the fitness function, pheromone, and heuristic information for cabin layout optimization were formulated, and the layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method.
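The abstract gives no algorithmic detail, so the following Python sketch is a rough, hypothetical illustration only of how a GA-ACA style hybrid might search a panel layout: a genetic population of control-to-slot assignments evolves while an ant-colony-style pheromone table biases mutation toward assignments seen in good layouts. The control names, use frequencies, grid, and fitness terms are invented and are not the authors' model.

    # Hypothetical GA-ACA style layout search; all names and numbers are assumptions.
    import random

    CONTROLS = ["throttle", "brake", "alarm_ack", "mode_sel", "gauge"]
    SLOTS = [(x, y) for y in range(2) for x in range(3)]          # 2x3 panel grid
    USE_FREQ = {"throttle": 0.9, "brake": 0.8, "alarm_ack": 0.5,
                "mode_sel": 0.3, "gauge": 0.6}                     # assumed use frequencies
    pheromone = {(c, s): 1.0 for c in CONTROLS for s in SLOTS}

    def fitness(layout):
        """Lower is better: frequently used controls should sit near the reach centre."""
        cx, cy = 1.0, 0.5                                          # assumed reach centre
        return sum(USE_FREQ[c] * (abs(x - cx) + abs(y - cy))
                   for c, (x, y) in layout.items())

    def random_layout():
        return dict(zip(CONTROLS, random.sample(SLOTS, len(CONTROLS))))

    def mutate(layout):
        """Move one control into a slot preferred by pheromone, displacing any occupant."""
        child = dict(layout)
        c = random.choice(CONTROLS)
        weights = [pheromone[(c, s)] for s in SLOTS]
        target = random.choices(SLOTS, weights=weights, k=1)[0]
        for other, s in child.items():
            if s == target:
                child[other] = child[c]
                break
        child[c] = target
        return child

    population = [random_layout() for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness)
        best = population[0]
        for c, s in best.items():                                  # pheromone deposit
            pheromone[(c, s)] += 1.0 / (1.0 + fitness(best))
        for key in pheromone:                                      # evaporation
            pheromone[key] *= 0.95
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print("best layout:", min(population, key=fitness))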
NASA Astrophysics Data System (ADS)
Lin, Y.; Zhang, W. J.
2005-02-01
This paper presents an approach to human-machine interface design for control room operators of nuclear power plants. The first step in designing an interface for a particular application is to determine information content that needs to be displayed. The design methodology for this step is called the interface design framework (called framework ). Several frameworks have been proposed for applications at varying levels, including process plants. However, none is based on the design and manufacture of a plant system for which the interface is designed. This paper presents an interface design framework which originates from design theory and methodology for general technical systems. Specifically, the framework is based on a set of core concepts of a function-behavior-state model originally proposed by the artificial intelligence research community and widely applied in the design research community. Benefits of this new framework include the provision of a model-based fault diagnosis facility, and the seamless integration of the design (manufacture, maintenance) of plants and the design of human-machine interfaces. The missing linkage between design and operation of a plant was one of the causes of the Three Mile Island nuclear reactor incident. A simulated plant system is presented to explain how to apply this framework in designing an interface. The resulting human-machine interface is discussed; specifically, several fault diagnosis examples are elaborated to demonstrate how this interface could support operators' fault diagnosis in an unanticipated situation.
Optimal design method to minimize users' thinking mapping load in human-machine interactions.
Huang, Yanqun; Li, Xu; Zhang, Jie
2015-01-01
The discrepancy between human cognition and machine requirements/behaviors usually results in a serious mental thinking-mapping load, or even disasters, in product operation. It is important to help people avoid human-machine interaction confusion and difficulty in today's society dominated by mental work. The objective is to improve the usability of a product and minimize the user's thinking-mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking-mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking-mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is uniquely determined first. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface state datasets. Finally, using the cluster analysis method, an optimum solution is picked out from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking-mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to the mental load minimization problem in human-machine interaction design.
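As a hedged illustration of the final selection step described above (distance to an ideal design), the Python sketch below scores invented interface alternatives on assumed thinking-mapping load factors and picks the candidate nearest the ideal profile. The factor names and numbers are placeholders, not the paper's data.

    # Hypothetical "nearest to ideal" selection; all values are invented for illustration.
    import math

    FACTORS = ["mapping_steps", "interpretation_time_s", "error_likelihood"]
    ideal = {"mapping_steps": 1.0, "interpretation_time_s": 0.5, "error_likelihood": 0.01}

    candidates = {
        "touch_menu":   {"mapping_steps": 3.0, "interpretation_time_s": 1.2, "error_likelihood": 0.08},
        "hard_buttons": {"mapping_steps": 2.0, "interpretation_time_s": 0.8, "error_likelihood": 0.03},
        "voice_dialog": {"mapping_steps": 1.0, "interpretation_time_s": 1.5, "error_likelihood": 0.10},
    }

    def distance(a, b):
        return math.sqrt(sum((a[f] - b[f]) ** 2 for f in FACTORS))

    best = min(candidates, key=lambda name: distance(candidates[name], ideal))
    for name, profile in candidates.items():
        print(f"{name:12s} distance to ideal = {distance(profile, ideal):.3f}")
    print("selected design:", best)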
Human machine interface display design document.
DOT National Transportation Integrated Search
2008-01-01
The purpose of this document is to describe the design for the human machine interface (HMI) display for the Next Generation 9-1-1 (NG9-1-1) System (or system of systems) based on the initial Tier 1 requirements identified for the NG9-1-1 S...
ERIC Educational Resources Information Center
Johnson, Christopher W.
1996-01-01
The development of safety-critical systems (aircraft cockpits and reactor control rooms) is qualitatively different from that of other interactive systems. These differences impose burdens on design teams that must ensure the development of human-machine interfaces. Analyzes strengths and weaknesses of formal methods for the design of user…
NASA Astrophysics Data System (ADS)
Ardi, S.; Ardyansyah, D.
2018-02-01
In the manufacturing of automotive spare parts, increased vehicle sales have resulted in increased customer demand for engine valve production. To meet customer demand, we carried out improvement and overhaul of the NTVS-2894 seat grinder machine on a machining line. The NTVS-2894 seat grinder machine had suffered decreased productivity, an increasing amount of trouble, and increasing downtime. To overcome these problems, the overhaul of the NTVS-2894 seat grinder machine covered both mechanical work and programs, including the design and manufacture of the HMI (Human Machine Interface) GP-4501T program, because prior to the overhaul the NTVS-2894 seat grinder machine did not have a backup HMI program. The goal of designing and manufacturing this program is to improve production achievement and to allow an operator to operate the machine and troubleshoot the NTVS-2894 seat grinder machine more easily, thereby reducing its downtime. The results after the design are that the HMI program was successfully rebuilt, machine productivity increased by 34.8%, and the amount of trouble and downtime decreased by 40%, from 3,160 minutes to 1,700 minutes. The implication of our design is that it facilitates the operator in operating the machine and makes it easier for the technician to maintain the machine and troubleshoot its problems.
Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles.
Eom, Hwisoo; Lee, Sang Hun
2015-06-12
A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model.
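The compatibility check described above can be pictured with a small state-machine example. In the Python sketch below, which uses an invented ACC-like model rather than the authors' formulation, any (display mode, event) pair that can lead to different displayed outcomes flags a potential mode confusion.

    # Hypothetical mode-confusion check over an invented machine/interface model pair.
    from collections import defaultdict

    # assumed ACC-like machine model: (machine mode, event) -> next machine mode
    machine = {
        ("off", "on_button"): "standby",
        ("standby", "set_speed"): "speed_ctrl",
        ("speed_ctrl", "car_ahead"): "gap_ctrl",
        ("gap_ctrl", "lane_clear"): "speed_ctrl",
        ("speed_ctrl", "cancel"): "standby",
        ("gap_ctrl", "cancel"): "off",          # hidden difference: cancel fully disengages here
    }

    # assumed interface model: the display mode the driver sees for each machine mode
    display_of = {"off": "OFF", "standby": "READY", "speed_ctrl": "ACTIVE", "gap_ctrl": "ACTIVE"}

    def find_confusions(machine, display_of):
        """Flag (display mode, event) pairs whose displayed outcome depends on hidden state."""
        outcomes = defaultdict(set)
        for (mode, event), next_mode in machine.items():
            outcomes[(display_of[mode], event)].add(display_of[next_mode])
        return {pair: nexts for pair, nexts in outcomes.items() if len(nexts) > 1}

    print("ambiguous (display, event) pairs:", find_confusions(machine, display_of) or "none")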
An operator interface design for a telerobotic inspection system
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tso, Kam S.; Hayati, Samad
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotics. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability, supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Materials and optimized designs for human-machine interfaces via epidermal electronics.
Jeong, Jae-Woong; Yeo, Woon-Hong; Akhtar, Aadeel; Norton, James J S; Kwack, Young-Jin; Li, Shuo; Jung, Sung-Young; Su, Yewang; Lee, Woosik; Xia, Jing; Cheng, Huanyu; Huang, Yonggang; Choi, Woon-Seop; Bretl, Timothy; Rogers, John A
2013-12-17
Thin, soft, and elastic electronics with physical properties well matched to the epidermis can be conformally and robustly integrated with the skin. Materials and optimized designs for such devices are presented for surface electromyography (sEMG). The findings enable sEMG from wide ranging areas of the body. The measurements have quality sufficient for advanced forms of human-machine interface. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Tool for Assessing the Text Legibility of Digital Human Machine Interfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roger Lew; Ronald L. Boring; Thomas A. Ulrich
2015-08-01
A tool intended to aid qualified professionals in the assessment of the legibility of text presented on a digital display is described. The assessment of legibility is primarily for the purposes of designing and analyzing human machine interfaces in accordance with NUREG-0700 and MIL-STD 1472G. The tool addresses shortcomings of existing guidelines by providing more accurate metrics of text legibility with greater sensitivity to design alternatives.
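One metric such a tool plausibly computes is the visual angle a character subtends at the operator's eye. The Python sketch below shows that calculation; the 16-arcminute threshold, character height, and viewing distance are assumptions for illustration, not values quoted from NUREG-0700 or MIL-STD 1472G.

    # Illustrative visual-angle calculation; the threshold and dimensions are assumed.
    import math

    def char_visual_angle_arcmin(char_height_mm, viewing_distance_mm):
        angle_rad = 2.0 * math.atan(char_height_mm / (2.0 * viewing_distance_mm))
        return math.degrees(angle_rad) * 60.0

    height_mm = 3.5          # character height on the display (assumed)
    distance_mm = 700.0      # operator viewing distance (assumed)
    arcmin = char_visual_angle_arcmin(height_mm, distance_mm)
    print(f"character subtends {arcmin:.1f} arcmin")
    print("meets assumed 16-arcmin minimum" if arcmin >= 16.0 else "too small at this distance")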
Visualization tool for human-machine interface designers
NASA Astrophysics Data System (ADS)
Prevost, Michael P.; Banda, Carolyn P.
1991-06-01
As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
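The spring/attractor/repeller metaphor maps naturally onto a force-directed layout step. The following Python sketch, with invented display names, relatedness weights, and gain constants (not the ISLE tool's actual rules), shows the idea: related information sources pull together while all sources repel each other to avoid crowding.

    # Hypothetical force-directed placement of related displays; all values are invented.
    positions = {"rpm": (0.2, 0.8), "oil_press": (0.9, 0.1),
                 "coolant": (0.5, 0.5), "alarms": (0.1, 0.2)}
    related = {("rpm", "oil_press"): 0.9, ("oil_press", "coolant"): 0.7, ("rpm", "alarms"): 0.4}

    def step(positions, k_spring=0.1, k_repel=0.01):
        forces = {name: [0.0, 0.0] for name in positions}
        for (a, b), w in related.items():                       # springs between related items
            ax, ay = positions[a]; bx, by = positions[b]
            forces[a][0] += k_spring * w * (bx - ax); forces[a][1] += k_spring * w * (by - ay)
            forces[b][0] += k_spring * w * (ax - bx); forces[b][1] += k_spring * w * (ay - by)
        names = list(positions)
        for i, a in enumerate(names):                            # repulsion between all pairs
            for b in names[i + 1:]:
                ax, ay = positions[a]; bx, by = positions[b]
                dx, dy = ax - bx, ay - by
                d2 = max(dx * dx + dy * dy, 1e-6)
                forces[a][0] += k_repel * dx / d2; forces[a][1] += k_repel * dy / d2
                forces[b][0] -= k_repel * dx / d2; forces[b][1] -= k_repel * dy / d2
        return {n: (positions[n][0] + fx, positions[n][1] + fy) for n, (fx, fy) in forces.items()}

    for _ in range(200):
        positions = step(positions)
    print({n: (round(x, 2), round(y, 2)) for n, (x, y) in positions.items()})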
Transfer of control system interface solutions from other domains to the thermal power industry.
Bligård, L-O; Andersson, J; Osvalder, A-L
2012-01-01
In a thermal power plant the operators' roles are to control and monitor the process to achieve efficient and safe production. To achieve this, the human-machine interfaces have a central part. The interfaces need to be updated and upgraded together with the technical functionality to maintain optimal operation. One way of achieving relevant updates is to study other domains and see how they have solved similar issues in their design solutions. The purpose of this paper is to present how interface design solution ideas can be transferred from domains with operator control to thermal power plants. In the study 15 domains were compared using a model for categorisation of human-machine systems. The result from the domain comparison showed that nuclear power, refinery and ship engine control were most similar to thermal power control. From the findings a basic interface structure and three specific display solutions were proposed for thermal power control: process parameter overview, plant overview, and feed water view. The systematic comparison of the properties of a human-machine system allowed interface designers to find suitable objects, structures and navigation logics in a range of domains that could be transferred to the thermal power domain.
Design of Human-Machine Interface and altering of pelvic obliquity with RGR Trainer.
Pietrusinski, Maciej; Unluhisarcikli, Ozer; Mavroidis, Constantinos; Cajigas, Iahn; Bonato, Paolo
2011-01-01
The Robotic Gait Rehabilitation (RGR) Trainer targets secondary gait deviations in stroke survivors undergoing rehabilitation. Using an impedance control strategy and a linear electromagnetic actuator, the device generates a force field to control pelvic obliquity through a Human-Machine Interface (i.e. a lower body exoskeleton). Herein we describe the design of the RGR Trainer Human-Machine Interface (HMI) and we demonstrate the system's ability to alter the pattern of movement of the pelvis during gait in a healthy subject. Results are shown for experiments during which we induced hip-hiking in healthy subjects. Our findings indicate that the RGR Trainer is able to affect pelvic obliquity during gait. Furthermore, we provide preliminary evidence of short-term retention of the modified pelvic obliquity pattern induced by the RGR Trainer. © 2011 IEEE
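The force field described is an impedance control law. A minimal Python sketch of that idea follows; the stiffness and damping gains, the desired obliquity trajectory, and the force units are assumptions, not the RGR Trainer's actual controller parameters.

    # Hypothetical impedance-style force field on pelvic obliquity error; gains are assumed.
    import math

    K_STIFF = 300.0    # N per rad of obliquity error (assumed)
    B_DAMP = 20.0      # N per rad/s of error rate (assumed)

    def desired_obliquity(phase):
        """Assumed desired pelvic obliquity (rad) as a function of gait phase in [0, 1)."""
        return 0.05 * math.sin(2.0 * math.pi * phase)

    def impedance_force(phase, measured_angle, measured_rate, dt=0.01):
        desired = desired_obliquity(phase)
        desired_rate = (desired_obliquity((phase + dt) % 1.0) - desired) / dt
        error = desired - measured_angle
        error_rate = desired_rate - measured_rate
        return K_STIFF * error + B_DAMP * error_rate   # commanded actuator force (N)

    print(impedance_force(phase=0.25, measured_angle=0.02, measured_rate=0.0))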
Human Machine Interfaces for Teleoperators and Virtual Environments
NASA Technical Reports Server (NTRS)
Durlach, Nathaniel I. (Compiler); Sheridan, Thomas B. (Compiler); Ellis, Stephen R. (Compiler)
1991-01-01
In Mar. 1990, a meeting organized around the general theme of teleoperation research into virtual environment display technology was conducted. This is a collection of conference-related fragments that will give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.
1981-02-01
…the machine. ARI's efforts in this area focus on human performance problems related to interactions with command and control centers, and on issues…improvement of the user-machine interface. Lacking consistent design principles, current practice results in a fragmented and unsystematic approach to system…complexity in the user-machine interface of BAS, ARI supported this effort for development of an online language for Army tactical intelligence
Adaptive displays and controllers using alternative feedback.
Repperger, D W
2004-12-01
Investigations on the design of haptic (force reflecting joystick or force display) controllers were conducted by viewing the display of force information within the context of several different paradigms. First, using analogies from electrical and mechanical systems, certain schemes of the haptic interface were hypothesized which may improve the human-machine interaction with respect to various criteria. A discussion is given on how this interaction benefits the electrical and mechanical system. To generalize this concept to the design of human-machine interfaces, three studies with haptic mechanisms were then synthesized and analyzed.
Human-machine interface hardware: The next decade
NASA Technical Reports Server (NTRS)
Marcus, Elizabeth A.
1991-01-01
In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable, faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user interact and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroad in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.
DESIGN AND EVALUATION OF INDIVIDUAL ELEMENTS OF THE INTERFACE FOR AN AGRICULTURAL MACHINE.
Rakhra, Aadesh K; Mann, Danny D
2018-01-29
If a user-centered approach is not used to design information displays, the quantity and quality of information presented to the user may not match the needs of the user, or it may exceed the capability of the human operator for processing and using that information. The result may be an excessive mental workload and reduced situation awareness of the operator, which can negatively affect the machine performance and operational outcomes. The increasing use of technology in agricultural machines may expose the human operator to excessive and undesirable information if the operator's information needs and information processing capabilities are ignored. In this study, a user-centered approach was used to design specific interface elements for an agricultural air seeder. Designs of the interface elements were evaluated in a laboratory environment by developing high-fidelity prototypes. Evaluations of the user interface elements yielded significant improvement in situation awareness (up to 11%; overall mean difference = 5.0 (4.8%), 95% CI (6.4728, 3.5939), p < 0.0001). Mental workload was reduced by up to 19.7% (overall mean difference = -5.2 (-7.9%), n = 30, α = 0.05). Study participants rated the overall performance of the newly designed user-centered interface elements higher in comparison to the previous designs (overall mean difference = 27.3 (189.8%), 99% CI (35.150, 19.384), p < 0.0001). Copyright © by the American Society of Agricultural Engineers.
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
The man-machine interface and its influence on the characteristics of computer displays in automated air traffic are discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems, are addressed. Psychological factors involved in the man-machine interface are stressed.
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun
2006-06-01
This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
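Two of the processing steps mentioned (locating the pupil centre and mapping it to screen coordinates) can be illustrated with a toy example. The Python sketch below uses a synthetic "eye image" and a simple linear calibration; it is not the authors' image-processing pipeline.

    # Hypothetical pupil-centre detection and screen mapping on a toy image.
    def pupil_center(gray, threshold=50):
        """gray: 2-D list of 0-255 intensities; returns (row, col) centroid of dark pixels."""
        rows = cols = count = 0
        for r, line in enumerate(gray):
            for c, value in enumerate(line):
                if value < threshold:
                    rows += r; cols += c; count += 1
        return (rows / count, cols / count) if count else None

    def to_screen(center, image_size, screen_size, calib_offset=(0.0, 0.0)):
        r, c = center
        h, w = image_size
        sw, sh = screen_size
        return (c / w * sw + calib_offset[0], r / h * sh + calib_offset[1])

    # toy 6x8 "eye image" with a dark 2x2 pupil near the centre
    image = [[200] * 8 for _ in range(6)]
    for r in (2, 3):
        for c in (3, 4):
            image[r][c] = 10

    center = pupil_center(image)
    print("pupil centre (row, col):", center)
    print("cursor position (x, y):", to_screen(center, (6, 8), (1920, 1080)))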
Human factors in space telepresence
NASA Technical Reports Server (NTRS)
Akin, D. L.; Howard, R. D.; Oliveria, J. S.
1983-01-01
The problems of interfacing a human with a teleoperation system for work in space are discussed. Much of the information presented here is the result of experience gained by the M.I.T. Space Systems Laboratory during the past two years of work on the ARAMIS (Automation, Robotics, and Machine Intelligence Systems) project. Many factors impact the design of the man-machine interface for a teleoperator. The effects of each are described in turn. An annotated bibliography gives the key references that were used. No conclusions are presented as a best design, since much depends on the particular application desired, and the relevant technology is swiftly changing.
Delivering key signals to the machine: seeking the electric signal that muscles emanate
NASA Astrophysics Data System (ADS)
Bani Hashim, A. Y.; Maslan, M. N.; Izamshah, R.; Mohamad, I. S.
2014-11-01
Due to the limitation of electric power generation in the human body, present human-machine interfaces have not been successful, because standard electronic circuit designs do not consider the specifications of the signals that originate from the skin. In general, the outcomes and applications of human-machine interfaces are limited to custom-designed subsystems, such as neuroprostheses. We seek to model the biodynamics beneath the skin as equivalent mathematical definitions, descriptions, and theorems. Within the human skin, there are networks of nerves that permit the skin to function as a multidimensional transducer. We investigate the nature of the structural skin. Apart from multiple networks of nerves, there are other segments within the skin, such as minute muscles. We identify the segments that are active when there is electromyography activity. When the nervous system is firing signals, the muscle is being stimulated. We evaluate the biodynamic phenomena of the muscles associated with the electromyography activity of the nervous system. In effect, we design a relationship between the human somatosensory system and a synthetic sensory system as the union of a complete set of the new domain of the functional system. This classifies electromyogram waveforms linked to the intended thought of an operator. The system will become the basis for delivering key signals to the machine such that the machine acts under the operator's intent.
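A common way to turn such electromyography activity into a machine command is to extract simple window features and threshold them. The Python sketch below, with invented signal values and threshold (not the authors' method), illustrates that step.

    # Hypothetical EMG window features and intent threshold; values are invented.
    import math

    def emg_features(window):
        rms = math.sqrt(sum(x * x for x in window) / len(window))
        zero_crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
        return rms, zero_crossings

    def detect_intent(window, rms_threshold=0.15):
        rms, zc = emg_features(window)
        return rms > rms_threshold, {"rms": round(rms, 3), "zero_crossings": zc}

    rest = [0.01, -0.02, 0.015, -0.01, 0.02, -0.015]
    contraction = [0.2, -0.35, 0.4, -0.25, 0.3, -0.38]
    print("rest:", detect_intent(rest))
    print("contraction:", detect_intent(contraction))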
Man-machine interface for the control of a lunar transport machine
NASA Technical Reports Server (NTRS)
Ashley, Richard; Bacon, Loring; Carlton, Scott Tim; May, Mark; Moore, Jimmy; Peek, Dennis
1987-01-01
A proposed first generation human interface control panel is described which will be used to control SKITTER, a three-legged lunar walking machine. Under development at Georgia Tech, SKITTER will be a multi-purpose, unmanned vehicle capable of preparing a site for the proposed lunar base in advance of the arrival of men. This walking machine will be able to accept modular special purpose tools, such as a crane, a core sampling drill, and a digging device, among others. The project was concerned with the design of a human interface which could be used, from Earth, to control the movements of SKITTER on the lunar surface. Preliminary inquiries were also made into necessary modifications required to adapt the panel to both a shirt-sleeve lunar environment and to a mobile unit which could be used by a man in a space suit at a lunar work site.
Proceedings of the 1986 IEEE international conference on systems, man and cybernetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-01-01
This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.
Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.
Deng, Li; Wang, Guohua; Chen, Bo
2015-01-01
In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and the operating comfort; then operating comfort can be predicted quantitatively. The operating comfort prediction result for the human-machine interface layout of a driller control room shows that the GEP-based operating comfort prediction model is fast and efficient, has good prediction effect, and can improve design efficiency.
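The data flow, reducing 16 joint angles to 4 factors and fitting a predictive comfort function, can be sketched as follows. In this hedged Python illustration, PCA and least squares stand in for the paper's factor analysis and GEP symbolic regression, and the data are random placeholders rather than the 22 evaluated postures.

    # Minimal stand-in for the reduce-then-fit pipeline; data and methods are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    angles = rng.uniform(0, 120, size=(22, 16))        # 22 evaluated postures x 16 joint angles
    comfort = rng.uniform(1, 10, size=22)              # comfort scores (placeholder)

    # step 1: dimension reduction (stand-in for factor analysis), 16 angles -> 4 factors
    centered = angles - angles.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    factors = centered @ vt[:4].T                      # shape (22, 4)

    # step 2: fit comfort = f(factors) (stand-in for the GEP-evolved expression)
    design = np.column_stack([factors, np.ones(len(factors))])
    coeffs, *_ = np.linalg.lstsq(design, comfort, rcond=None)

    def predict_comfort(new_angles):
        f = (new_angles - angles.mean(axis=0)) @ vt[:4].T
        return float(np.append(f, 1.0) @ coeffs)

    print("predicted comfort for a new posture:", round(predict_comfort(rng.uniform(0, 120, 16)), 2))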
Herbert, Robert; Kim, Jong-Hoon; Kim, Yun Soung; Lee, Hye Moon; Yeo, Woon-Hong
2018-01-24
Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas.
Formal verification of human-automation interaction
NASA Technical Reports Server (NTRS)
Degani, Asaf; Heymann, Michael
2002-01-01
This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training material (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces.
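The kind of check described can be illustrated by composing a machine model with the user's model (as given in training material) and searching for an event sequence where the predicted and actual displays diverge. The Python sketch below uses a small invented autopilot-like example, not the paper's case study.

    # Hypothetical divergence search between a machine model and a user's (manual-based) model.
    from itertools import product

    # machine: (state, event) -> (next_state, display); user_model: same structure, simplified
    machine = {
        ("alt_hold", "capture"): ("alt_capture", "ALT*"),
        ("alt_capture", "reach_alt"): ("alt_hold", "ALT"),
        ("alt_capture", "new_target"): ("vert_speed", "V/S"),   # behaviour missing from the manual
    }
    user_model = {
        ("alt_hold", "capture"): ("alt_capture", "ALT*"),
        ("alt_capture", "reach_alt"): ("alt_hold", "ALT"),
        ("alt_capture", "new_target"): ("alt_capture", "ALT*"),
    }
    events = ["capture", "reach_alt", "new_target"]

    def first_divergence(max_len=4):
        for seq in product(events, repeat=max_len):
            m_state = u_state = "alt_hold"
            for i, e in enumerate(seq):
                if (m_state, e) not in machine or (u_state, e) not in user_model:
                    break
                m_state, m_disp = machine[(m_state, e)]
                u_state, u_disp = user_model[(u_state, e)]
                if m_disp != u_disp:
                    return seq[:i + 1], m_disp, u_disp
        return None

    print(first_divergence())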
Study About Ceiling Design for Main Control Room of NPP with HFE
NASA Astrophysics Data System (ADS)
Gu, Pengfei; Ni, Ying; Chen, Weihua; Chen, Bo; Zhang, Jianbo; Liang, Huihui
Since human factors engineering (HFE) has come into use in the control room design of nuclear power plants (NPP), the human-machine interface (HMI) has gradually developed more harmoniously, especially through the use of digital technology. Compared with the analog technology previously used for the human-machine interface, human-machine interaction has been further enhanced. HFE and main control room (MCR) design engineering for an NPP form a multidisciplinary combination, mainly involving electrical and instrument control, reactor, machinery, systems engineering, and management disciplines. However, the MCR is not only equipped with the HMI provided by the equipment; more importantly, it provides the operator with a working environment, of which the main control room ceiling is a part. The ceiling design of the main control room, which relates to HFE and influences the performance of staff, should also take environmental and aesthetic factors into account, especially through the introduction of professional design experience and evaluation methods. Based on implementation experience from the Ling Ao phase II and Hong Yanhe projects, this study analyzes the lighting effect, space partitioning, and visual load of the main control room ceiling of an NPP. Combined with the requirements of standards, the advantages and disadvantages of the main control room ceiling design are discussed, and, considering the requirements of light weight, noise reduction, fire prevention, and moisture protection, a ceiling design solution for the main control room is also discussed.
Software Engineering for User Interfaces. Technical Report.
ERIC Educational Resources Information Center
Draper, Stephen W.; Norman, Donald A.
The discipline of software engineering can be extended in a natural way to deal with the issues raised by a systematic approach to the design of human-machine interfaces. The user should be treated as part of the system being designed and projects should be organized to take into account the current lack of a priori knowledge of user interface…
Patzel-Mattern, Katja
2005-01-01
The 20th century is the century of technical artefacts. With their existence and use they create an artificial reality within which humans have to position themselves. Psychotechnik is an attempt to enable humans to achieve this positioning. It gained importance in Germany after World War I and had its heyday between 1919 and 1926. On the basis of the activity of the engineer and supporter of Psychotechnik Georg Schlesinger, whose particular interest was disabled soldiers, the present essay investigates the understanding of the body and of the human being in Psychotechnik as an applied science. It turns out that the biggest achievement of Psychotechnik was to establish a new view of the relation between human being and machine. Thus it helped to show that the human-machine interface is a shapable unit. Psychotechnik sees the human body and its physique as the last instance for the design of machines. Its main concern is to optimize the relation between human being and machine rather than to standardize human beings according to the construction of machines. After its splendid rise during the Weimar Republic and its rapid decline from the late 1920s onward, Psychotechnik nowadays attracts scholarly attention as a historical phenomenon. The main attention in the current discourse lies on aspects concerning the philosophy of science: the unity of body and soul, the understanding of the human-machine interface as a shapable unit, and the human being as the last instance of this unit.
NASA Technical Reports Server (NTRS)
Howard, S. D.
1987-01-01
Effective user interface design in software systems is a complex task that takes place without adequate modeling tools. By combining state transition diagrams and the storyboard technique of filmmakers, State Transition Storyboards were developed to provide a detailed modeling technique for the Goldstone Solar System Radar Data Acquisition System human-machine interface. Illustrations are included with a description of the modeling technique.
Future Cyborgs: Human-Machine Interface for Virtual Reality Applications
2007-04-01
Powell, Robert R., Major, USAF. Blue Horizons, April 2007.
Man-systems integration and the man-machine interface
NASA Technical Reports Server (NTRS)
Hale, Joseph P.
1990-01-01
Viewgraphs on man-systems integration and the man-machine interface are presented. Man-systems integration applies the systems' approach to the integration of the user and the machine to form an effective, symbiotic Man-Machine System (MMS). A MMS is a combination of one or more human beings and one or more physical components that are integrated through the common purpose of achieving some objective. The human operator interacts with the system through the Man-Machine Interface (MMI).
Intelligent Adaptive Interface: A Design Tool for Enhancing Human-Machine System Performances
2009-10-01
…and customizable. Thus, an intelligent interface should tailor its parameters to certain prescribed specifications or convert itself and adjust to…
A Graphical Operator Interface for a Telerobotic Inspection System
NASA Technical Reports Server (NTRS)
Kim, W. S.; Tso, K. S.; Hayati, S.
1993-01-01
Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability. It supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Human perceptual deficits as factors in computer interface test and evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowser, S.E.
1992-06-01
Issues related to testing and evaluating human computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some deficit capabilities tend to be found at higher-than-overall population distributions in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Secondly, interface designs should use multimode information coding.
Gloved Human-Machine Interface
NASA Technical Reports Server (NTRS)
Adams, Richard (Inventor); Hannaford, Blake (Inventor); Olowin, Aaron (Inventor)
2015-01-01
Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.
Interface design in the process industries
NASA Technical Reports Server (NTRS)
Beaverstock, M. C.; Stassen, H. G.; Williamson, R. A.
1977-01-01
Every operator runs his plant in accord with his own mental model of the process. In this sense, one characteristic of an ideal man-machine interface is that it be in harmony with that model. With this theme in mind, the paper first reviews the functions of the process operator and compares them with human operators involved in control situations previously studied outside the industrial environment (pilots, air traffic controllers, helmsmen, etc.). A brief history of the operator interface in the process industry and the traditional methodology employed in its design is then presented. Finally, a much more fundamental approach utilizing a model definition of the human operator's behavior is presented.
Structure design of lower limb exoskeletons for gait training
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Ziqiang; Tao, Chunjing; Ji, Run
2015-09-01
Due to the close physical interaction between human and machine in the process of gait training, lower limb exoskeletons should be safe, comfortable and able to smoothly transfer desired driving forces/moments to the patients. Correlatively, in kinematics the exoskeletons are required to be compatible with human lower limbs and thereby to avoid uncontrollable interactional loads at the human-machine interfaces. Such a requirement makes the structure design of exoskeletons very difficult because the human-machine closed chains are complicated. In addition, both the axis misalignments and the kinematic character difference between the exoskeleton and human joints should be taken into account. By analyzing the DOF (degrees of freedom) of the whole human-machine closed chain, the human-machine kinematic incompatibility of lower limb exoskeletons is studied. An effective method for the structure design of lower limb exoskeletons, which are kinematically compatible with the human lower limb, is proposed. Applying this method, the structure synthesis of lower limb exoskeletons containing only one-DOF revolute and prismatic joints is investigated; the feasible basic structures of exoskeletons are developed and classified into three different categories. With consideration of the quasi-anthropopathic feature, structural simplicity and wearable comfort of lower limb exoskeletons, a joint replacement and structure comparison based approach to select the ideal structures of lower limb exoskeletons is proposed, by which three optimal exoskeleton structures are obtained. This paper indicates that the human-machine closed chain formed by the exoskeleton and the human lower limb should be an even-constrained kinematic system in order to avoid uncontrollable human-machine interactional loads. The presented method for the structure design of lower limb exoskeletons is universal and simple, and hence can be applied to other kinds of wearable exoskeletons.
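The DOF analysis rests on standard mobility bookkeeping. As a hedged illustration, the Python sketch below applies the Kutzbach-Grubler criterion to an invented single-loop leg-plus-exoskeleton chain; the link count and joint freedoms are assumptions, and a zero or negative result would indicate an overconstrained pairing of the kind the paper warns against.

    # Illustrative spatial mobility count for an assumed human-exoskeleton closed loop.
    def kutzbach_mobility(num_links, joint_dofs):
        """Spatial mobility M = 6*(n - 1 - j) + sum(f_i), with j joints of f_i freedoms each."""
        j = len(joint_dofs)
        return 6 * (num_links - 1 - j) + sum(joint_dofs)

    # links: pelvis (ground), thigh, shank, exo thigh link, exo shank link (assumed)
    num_links = 5
    # joints: hip (3), knee (1), exo hip (1), exo knee (1), shank cuff modelled as 3-DOF (assumed)
    joint_dofs = [3, 1, 1, 1, 3]
    print("closed-chain mobility:", kutzbach_mobility(num_links, joint_dofs))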
NASA Astrophysics Data System (ADS)
Huang, Zhaohui; Huang, Xiemin
2018-04-01
This paper first introduces the application trend of integrating multi-channel interactions in automotive HMI (Human Machine Interface), starting from the complex information models faced by existing automotive HMI, and describes various interaction modes. By comparing voice interaction with touch screens, gestures and other interaction modes, the potential and feasibility of voice interaction in automotive HMI experience design are concluded. Then, the related theories of voice interaction, identification technologies, human beings' cognitive models of voices and voice design methods are further explored, and the research priority of this paper is proposed, i.e. how to design voice interaction to create more humane task-oriented dialogue scenarios to enhance the interactive experience of automotive HMI. The specific scenarios in driving behaviors suitable for the use of voice interaction are studied and classified, and the usability principles and key elements for automotive HMI voice design are proposed according to the scenario features. Then, through a user-participatory usability testing experiment, the dialogue processes of voice interaction in automotive HMI are defined. The logics and grammars in voice interaction are classified according to the experimental results, and the mental models in the interaction processes are analyzed. Finally, the voice interaction design method to create humane task-oriented dialogue scenarios in the driving environment is proposed.
49 CFR Appendix E to Part 236 - Human-Machine Interface (HMI) Design
Code of Federal Regulations, 2013 CFR
2013-10-01
... operator to change position; (4) Arrange controls according to their expected order of use; (5) Group similar controls together; (6) Design for high stimulus-response compatibility (geometric and conceptual); (7) Design safety-critical controls to require more than one positive action to activate (e.g., auto...
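Criterion (7) above can be illustrated in code: a safety-critical command fires only after two distinct positive actions. The Python sketch below is illustrative only; the arm/confirm names and the 3-second window are assumptions, not regulatory requirements.

    # Illustrative two-positive-action guard for a safety-critical control.
    import time

    class GuardedControl:
        def __init__(self, action, window_s=3.0):
            self.action = action
            self.window_s = window_s
            self._armed_at = None

        def arm(self):
            self._armed_at = time.monotonic()

        def confirm(self):
            armed = self._armed_at is not None and time.monotonic() - self._armed_at <= self.window_s
            self._armed_at = None
            if armed:
                self.action()
                return True
            return False

    emergency_stop = GuardedControl(lambda: print("EMERGENCY BRAKE APPLIED"))
    print(emergency_stop.confirm())   # False: confirm without arming does nothing
    emergency_stop.arm()
    print(emergency_stop.confirm())   # True: two positive actions within the window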
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1998-01-01
Historically Command Management Systems (CMS) have been large, expensive, spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS or a set of core components for CMS systems. Current MOC (mission operations center) hardware and software include Unix workstations, the C/C++ and Java programming languages, and X and Java window interface representations. This configuration provides the power and flexibility to support sophisticated systems and intelligent user interfaces that exploit state-of-the-art technologies in human-machine systems engineering, decision making, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of the issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, design and analysis tools from a human-machine systems engineering point of view (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of a spacecraft-specific CMS as well as continuity for CMS design and development across spacecraft with varying needs. The savings in this case is in software reuse at all stages of the software engineering process.
Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles
NASA Technical Reports Server (NTRS)
Queen, Steven M.; Sanner, Kurt Gregory
2011-01-01
One of the pilot-machine interfaces (the forward viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.
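The display-size argument is geometric: an out-the-window impression requires the display to subtend roughly the same horizontal angle at the pilot's eye as the camera lens captures. The Python sketch below works that comparison through with assumed display widths, viewing distance, and camera field of view; none of the numbers come from the DROID setup.

    # Illustrative comparison of display subtended angle vs camera field of view.
    import math

    def subtended_angle_deg(width_mm, distance_mm):
        return math.degrees(2.0 * math.atan(width_mm / (2.0 * distance_mm)))

    camera_fov_deg = 90.0                      # assumed wide-angle camera
    viewing_distance_mm = 600.0                # assumed eye-to-screen distance
    for name, width_mm in [("20-inch monitor", 440.0), ("55-inch monitor", 1210.0)]:
        display_deg = subtended_angle_deg(width_mm, viewing_distance_mm)
        scale = display_deg / camera_fov_deg
        print(f"{name}: subtends {display_deg:.0f} deg -> scene compressed to {scale:.2f}x of real angle")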
Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.
Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen
2012-01-01
An extremely reliable remote control system for a subsea blowout preventer stack is developed based on an off-the-shelf triple modular redundancy system. To meet a high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy and network redundancy are used to design the system hardware architecture. The control logic, human-machine interface graphical design and redundant databases are developed by using off-the-shelf software. A series of experiments were performed in the laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For faults of programmable logic controllers, discrete input groups and analog input groups, the control system could give correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
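The triple modular redundancy principle mentioned above reduces, at its core, to 2-out-of-3 voting. The short Python sketch below illustrates that voting step with invented channel values; it is not the developed control system's logic.

    # Illustrative 2-out-of-3 voter for triple modular redundant channels.
    from collections import Counter

    def vote(channel_values):
        value, count = Counter(channel_values).most_common(1)[0]
        if count < 2:
            raise RuntimeError("no majority - all three channels disagree")
        faulted = [i for i, v in enumerate(channel_values) if v != value]
        return value, faulted

    # e.g. three command readings from redundant controllers, channel 1 has faulted
    print(vote(["close_bop", "close_bop", "close_bop"]))   # ('close_bop', [])
    print(vote(["close_bop", "vent", "close_bop"]))        # ('close_bop', [1])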
Human factors model concerning the man-machine interface of mining crewstations
NASA Technical Reports Server (NTRS)
Rider, James P.; Unger, Richard L.
1989-01-01
The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspect of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.
1990-03-01
…decided to have three kinds of sessions: invited-paper sessions, panel discussions, and poster sessions. The invited papers were divided into papers…soon followed. Applications in medicine, involving exploration and operation within the human body, are now receiving increased attention. Early… attention toward issues that may be important for the design of auditory interfaces. The importance of appropriate auditory inputs to observers with normal
Learning Machine, Vietnamese Based Human-Computer Interface.
ERIC Educational Resources Information Center
Northwest Regional Educational Lab., Portland, OR.
The sixth session of IT@EDU98 consisted of seven papers on the topic of the learning machine--Vietnamese based human-computer interface, and was chaired by Phan Viet Hoang (Informatics College, Singapore). "Knowledge Based Approach for English Vietnamese Machine Translation" (Hoang Kiem, Dinh Dien) presents the knowledge base approach,…
Human factors issues in telerobotic systems for Space Station Freedom servicing
NASA Technical Reports Server (NTRS)
Malone, Thomas B.; Permenter, Kathryn E.
1990-01-01
Requirements for Space Station Freedom servicing are described and the state-of-the-art for telerobotic system on-orbit servicing of spacecraft is defined. The projected requirements for the Space Station Flight Telerobotic Servicer (FTS) are identified. Finally, the human factors issues in telerobotic servicing are discussed. The human factors issues are basically three: the definition of the role of the human versus automation in system control; the identification of operator-device interface design requirements; and the requirements for development of an operator-machine interface simulation capability.
Embedded Control System for Smart Walking Assistance Device.
Bosnak, Matevz; Skrjanc, Igor
2017-03-01
This paper presents the design and implementation of a unique control system for a smart hoist, a therapeutic device that is used in the rehabilitation of walking. The control system features a unique human-machine interface that allows the human to control the system intuitively just by moving or rotating his or her body. The paper contains an overview of the complete system, including the design and implementation of custom sensors, DC servo motor controllers, communication interfaces, and an embedded-system-based central control system. The prototype of the complete system was tested by conducting a six-run experiment on 11 subjects, and the results show that the proposed control system interface is indeed intuitive and simple for the user to adopt.
Cao, Ran; Pu, Xianjie; Du, Xinyu; Yang, Wei; Wang, Jiaona; Guo, Hengyu; Zhao, Shuyu; Yuan, Zuqing; Zhang, Chi; Li, Congju; Wang, Zhong Lin
2018-05-22
Multifunctional electronic textiles (E-textiles) with embedded electric circuits hold great application prospects for future wearable electronics. However, most E-textiles still face critical challenges, including air permeability, satisfactory washability, and mass fabrication. In this work, we fabricate a washable E-textile that addresses all of these concerns and demonstrate its application as a self-powered triboelectric gesture textile for intelligent human-machine interfacing. Utilizing conductive carbon nanotubes (CNTs) and screen-printing technology, this E-textile combines high conductivity (0.2 kΩ/sq) with high air permeability (88.2 mm/s) and can be manufactured on common fabric at large scales. Owing to the strong interaction between the CNTs and the fabric, the electrode shows excellent stability under harsh mechanical deformation and even after being washed. Moreover, based on a single-electrode-mode triboelectric nanogenerator and electrode pattern design, our E-textile exhibits highly sensitive touch/gesture sensing performance and has potential applications in human-machine interfacing.
The use of affective interaction design in car user interfaces.
Gkouskos, Dimitrios; Chen, Fang
2012-01-01
Recent developments in the car industry have put Human Machine Interfaces under the spotlight. Developing gratifying human-car interactions has become one of the more prominent areas that car manufacturers want to invest in. However, concepts like emotional design remain foreign to the industry. In this study, 12 experts in the field of automobile HMI design were interviewed in order to investigate their needs and opinions of emotional design. Results show that emotional design has yet to be introduced for this context of use. Designers need a tool customized for the intricacies of the car HMI field that can provide them with support and guidance so that they can create emotionally attractive experiences for drivers and passengers alike.
Generating a Reduced Gravity Environment on Earth
NASA Technical Reports Server (NTRS)
Dungan, Larry K.; Cunningham, Tom; Poncia, Dina
2010-01-01
Since the 1950s several reduced gravity simulators have been designed and utilized in preparing humans for spaceflight and in reduced gravity system development. The Active Response Gravity Offload System (ARGOS) is the newest and most realistic gravity offload simulator. ARGOS provides three degrees of motion within the test area and is scalable for full building deployment. The inertia of the overhead system is eliminated by an active motor and control system. This presentation will discuss what ARGOS is, how it functions, and the unique challenges of interfacing to the human. Test data and video for human and robotic systems will be presented. A major variable in the human machine interaction is the interface of ARGOS to the human. These challenges along with design solutions will be discussed.
Multiple man-machine interfaces
NASA Technical Reports Server (NTRS)
Stanton, L.; Cook, C. W.
1981-01-01
The multiple man-machine interfaces inherent in military pilot training, their social implications, and the issue of possible negative feedback were explored. Modern technology has produced machines that can see, hear, and touch with greater accuracy and precision than human beings. Consequently, the military pilot is more a systems manager, often doing battle against a target he never sees. It is concluded that unquantifiable human activity requires motivation that is not intrinsic to a machine.
NASA Technical Reports Server (NTRS)
Mitchell, C. M.
1982-01-01
The NASA-Goddard Space Flight Center is responsible for the control and ground support of all of NASA's unmanned near-earth satellites. Traditionally, each satellite had its own dedicated mission operations room. In the mid-seventies, an integration of some of these dedicated facilities was begun with the primary objective of reducing costs. In this connection, the Multi-Satellite Operations Control Center (MSOCC) was designed. MSOCC is currently a labor-intensive operation. Recently, Goddard has become increasingly aware of human factors and human-machine interface issues. A summary is provided of some of the attempts to apply human factors considerations in the design of command and control environments. Current and future activities with respect to human factors and systems design are discussed, giving attention to the allocation of tasks between human and computer, and the interface for the human-computer dialogue.
Techniques and applications for binaural sound manipulation in human-machine interfaces
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.
1990-01-01
The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
Techniques and applications for binaural sound manipulation in human-machine interfaces
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.
1992-01-01
The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Wilhelm, Dirk; Reiser, Silvano; Meining, Alexander; Feussner, Hubertus
2015-08-01
To investigate why natural orifice translumenal endoscopic surgery (NOTES) has not yet become widely accepted and to determine whether the main reason is still the lack of appropriate platforms due to the deficiency of applicable interfaces. To assess expectations of a suitable interface design, we performed a survey on human-machine interfaces for NOTES mechatronic support systems among surgeons, gastroenterologists, and medical engineers. Of 120 distributed questionnaires, each consisting of 14 distinct questions, 100 (83%) were eligible for analysis. A mechatronic platform for NOTES was considered "important" by 71% of surgeons, 83% of gastroenterologists, and 56% of medical engineers. "Intuitivity" and "simple to use" were the most favored aspects (33% to 51%). Haptic feedback was considered "important" by 70% of participants. In all, 53% of surgeons, 50% of gastroenterologists, and 33% of medical engineers already had experience with NOTES platforms or other surgical robots; however, current interfaces met expectations in only just over 50% of cases. Whereas surgeons did not favor a certain working posture, gastroenterologists and medical engineers preferred a sitting position. Three-dimensional visualization was generally considered "nice to have" (67% to 72%); however, for 26% of surgeons, 17% of gastroenterologists, and 7% of medical engineers it did not matter (P = 0.018). Requests and expectations of human-machine interfaces for NOTES seem to be generally similar for surgeons, gastroenterologists, and medical engineers. Consensus exists on the importance of developing interfaces that are both intuitive and simple to use, are similar to preexisting familiar instruments, and exceed currently available systems. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.
2017-05-01
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors in safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of the cosmonaut's gestures and poses. When travelling in the "Follow Me" (master/slave) mode, a robot uses onboard tools to track the cosmonaut's position and movements and builds its itinerary from these data. Interaction in the cosmonaut-robot system on the lunar surface differs significantly from interaction on the Earth's surface. For example, a person dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human performs movements less accurately and makes mistakes more often. All this leads to new requirements for the convenient use of a man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication, it is necessary to provide options for duplicating commands at each task stage and for gesture recognition. New tools and techniques for space missions must first be examined in laboratory conditions and then in field tests (proof tests at the site of application). The article analyzes methods for detecting and tracking the cosmonaut's movements and recognizing gestures during EVA, which can be used in the design of the human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. The simulation involves environment visualization and modeling of the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.
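A minimal sketch can make the master/slave "Follow Me" idea concrete: the robot regulates its speed and heading from the tracked position of the cosmonaut. This is an illustrative Python example, not the authors' method; the standoff distance, gains, and velocity limits are hypothetical.

    # Minimal "Follow Me" sketch: the robot regulates forward speed and heading
    # from the tracked position of the cosmonaut. All gains, limits, and the
    # 3 m standoff distance are hypothetical values for illustration only.
    import math

    def follow_me_command(target_x, target_y, standoff=3.0,
                          k_lin=0.8, k_ang=1.5, v_max=1.0, w_max=0.8):
        """Compute (linear, angular) velocity commands in the robot frame.

        target_x, target_y: cosmonaut position relative to the robot [m].
        """
        distance = math.hypot(target_x, target_y)
        bearing = math.atan2(target_y, target_x)       # 0 rad = straight ahead
        linear = k_lin * (distance - standoff)         # close the range error
        angular = k_ang * bearing                      # turn toward the cosmonaut
        linear = max(-v_max, min(v_max, linear))
        angular = max(-w_max, min(w_max, angular))
        return linear, angular

    # Cosmonaut 5 m ahead and 1 m to the left: drive forward and turn slightly left.
    print(follow_me_command(5.0, 1.0))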
Knowledge-based load leveling and task allocation in human-machine systems
NASA Technical Reports Server (NTRS)
Chignell, M. H.; Hancock, P. A.
1986-01-01
Conventional human-machine systems use task allocation policies which are based on the premise of a flexible human operator. This individual is most often required to compensate for and augment the capabilities of the machine. The development of artificial intelligence and improved technologies have allowed for a wider range of task allocation strategies. In response to these issues a Knowledge Based Adaptive Mechanism (KBAM) is proposed for assigning tasks to human and machine in real time, using a load leveling policy. This mechanism employs an online workload assessment and compensation system which is responsive to variations in load through an intelligent interface. This interface consists of a loading strategy reasoner which has access to information about the current status of the human-machine system as well as a database of admissible human/machine loading strategies. Difficulties standing in the way of successful implementation of the load leveling strategy are examined.
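A minimal sketch of the load-leveling idea, assuming a greedy policy that assigns each task to the less-loaded admissible agent; the capability table, load figures, and capacity threshold are illustrative assumptions, not the KBAM design itself.

    # Minimal sketch of load-levelling task allocation between a human and a
    # machine agent. Loads, capabilities, and the capacity limit are assumed.

    def allocate(tasks, capabilities, human_load=0.0, machine_load=0.0, capacity=1.0):
        """Greedily assign tasks to the less-loaded agent that can perform them."""
        assignment = {}
        for name, demand in tasks:
            admissible = [a for a in ("human", "machine") if name in capabilities[a]]
            loads = {"human": human_load, "machine": machine_load}
            agent = min(admissible, key=lambda a: loads[a])
            if loads[agent] + demand > capacity:
                agent = "deferred"            # neither agent can absorb the task now
            elif agent == "human":
                human_load += demand
            else:
                machine_load += demand
            assignment[name] = agent
        return assignment, human_load, machine_load

    tasks = [("monitor telemetry", 0.3), ("plan route", 0.4), ("log event", 0.2)]
    capabilities = {"human": {"monitor telemetry", "plan route", "log event"},
                    "machine": {"monitor telemetry", "log event"}}
    print(allocate(tasks, capabilities))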
Human Factors in Accidents Involving Remotely Piloted Aircraft
NASA Technical Reports Server (NTRS)
Merlin, Peter William
2013-01-01
This presentation examines human factors that contribute to RPA mishaps and provides analysis of lessons learned. RPA accident data from U.S. military and government agencies were reviewed and analyzed to identify human factors issues. Common contributors to RPA mishaps fell into several major categories: cognitive factors (pilot workload), physiological factors (fatigue and stress), environmental factors (situational awareness), staffing factors (training and crew coordination), and design factors (human machine interface).
EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton.
Yin, Yue H; Fan, Yuan J; Xu, Li D
2012-07-01
Although a lower extremity exoskeleton shows great prospects for the rehabilitation of the lower limb, it has not yet been widely applied to the clinical rehabilitation of the paralyzed. This is partly because information exchange between the paralyzed user and existing exoskeletons is insufficient to meet the requirements of harmonious control. In this research, a bidirectional human-machine interface including a neurofuzzy controller and an extended physiological proprioception (EPP) feedback system is developed by imitating the biological closed-loop control system of the human body. The neurofuzzy controller is built to decode human motion in advance by fusing fuzzy electromyographic signals reflecting human motion intention with precise proprioception providing joint angular feedback information. It transmits control information from the human to the exoskeleton, while the EPP feedback system based on haptic stimuli transmits motion information of the exoskeleton back to the human. Joint angle and torque information are transmitted to the human body in the form of air pressure. The real-time bidirectional human-machine interface can help a patient with lower limb paralysis to control the exoskeleton with his/her healthy side and simultaneously perceive motion on the paralyzed side by EPP. The interface rebuilds a closed-loop motion control system for paralyzed patients and realizes harmonious control of the human-machine system.
1992-10-01
Proceedings front matter and table of contents (fragmentary), Advanced Transport Operating Systems Program Office, Langley Research Center, Hampton, VA. Recoverable entries include a paper by J. H. Lind and C. G. Burge; "Advanced Cockpit - Mission and Image Management" by J. Struck; "Aircrew Acceptance of Automation in the Cockpit" by M. Hicks; and, under the section "Design Concepts and Tools", "A Systems Approach to the Advanced Aircraft Man-Machine Interface" by F. Armogida and "Management of Avionics Data in the Cockpit".
NASA Technical Reports Server (NTRS)
Potter, William J.; Mitchell, Christine M.
1993-01-01
Historically, command management systems (CMS) have been large and expensive spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS system. New technologies, in addition to a core CMS common to a range of spacecraft, may facilitate the training and enhance the efficiency of CMS operations. Current mission operations center (MOC) hardware and software include Unix workstations, the C/C++ programming languages, and an X window interface. This configuration provides the power and flexibility to support sophisticated and intelligent user interfaces that exploit state-of-the-art technologies in human-machine interaction, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of these issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, human-machine systems design and analysis tools (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of the CMS for a specific spacecraft as well as continuity for CMS design and development across spacecraft. The first six months of this research saw a broad investigation by Georgia Tech researchers into the function, design, and operation of current and planned command management systems at Goddard Space Flight Center. As the first step, the researchers attempted to understand the current and anticipated horizons of command management systems at Goddard. Preliminary results are given on CMS commonalities and causes of low re-use, and methods are proposed to facilitate increased re-use.
NASA Astrophysics Data System (ADS)
Mantecón, Tomás; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso
2014-06-01
New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human-machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control. A tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures developed especially for the validation of the proposed system.
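The recognition pipeline can be sketched with a basic 8-neighbour LBP code pooled into a histogram and fed to scikit-learn's SVC. This is a simplified stand-in for the spatio-temporal descriptor developed by the authors, and the random depth patches below are placeholders for real gesture data.

    # Minimal sketch of the recognition pipeline: a basic 8-neighbour LBP code
    # computed on a depth patch, pooled into a histogram, and classified with
    # an SVM. Plain LBP is used instead of the paper's spatio-temporal variant.
    import numpy as np
    from sklearn.svm import SVC

    def lbp_histogram(depth_patch):
        """Return a normalized 256-bin LBP histogram of a 2-D depth image."""
        d = depth_patch.astype(np.float32)
        center = d[1:-1, 1:-1]
        codes = np.zeros(center.shape, dtype=np.int32)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = d[1 + dy:d.shape[0] - 1 + dy, 1 + dx:d.shape[1] - 1 + dx]
            codes += (neighbour >= center).astype(np.int32) * (1 << bit)
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        return hist / hist.sum()

    # Toy training set: random depth patches standing in for two hand gestures.
    rng = np.random.default_rng(0)
    X = np.array([lbp_histogram(rng.integers(0, 2000, (32, 32))) for _ in range(20)])
    y = np.array([0] * 10 + [1] * 10)
    classifier = SVC(kernel="rbf").fit(X, y)
    print(classifier.predict(X[:2]))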
Considerations for human-machine interfaces in tele-operations
NASA Technical Reports Server (NTRS)
Newport, Curt
1991-01-01
Numerous factors impact on the efficiency of tele-operative manipulative work. Generally, these are related to the physical environment of the tele-operator and how he interfaces with robotic control consoles. The capabilities of the operator can be influenced by considerations such as temperature, eye strain, body fatigue, and boredom created by repetitive work tasks. In addition, the successful combination of man and machine will, in part, be determined by the configuration of the visual and physical interfaces available to the teleoperator. The design and operation of system components such as full-scale and mini-master manipulator controllers, servo joysticks, and video monitors will have a direct impact on operational efficiency. As a result, the local environment and the interaction of the operator with the robotic control console have a substantial effect on mission productivity.
All printed touchless human-machine interface based on only five functional materials
NASA Astrophysics Data System (ADS)
Scheipl, G.; Zirkl, M.; Sawatdee, A.; Helbig, U.; Krause, M.; Kraker, E.; Andersson Ersman, P.; Nilsson, D.; Platt, D.; Bodö, P.; Bauer, S.; Domann, G.; Mogessie, A.; Hartmann, Paul; Stadlober, B.
2012-02-01
We demonstrate the printing of a complex smart integrated system using only five functional inks: the fluoropolymer P(VDF:TrFE) (poly(vinylidene fluoride-trifluoroethylene)) sensor ink, the conductive polymer PEDOT:PSS (poly(3,4-ethylenedioxythiophene):poly(styrene sulfonic acid)) ink, a conductive carbon paste, a polymeric electrolyte, and SU8 for separation. The result is a touchless human-machine interface, including piezo- and pyroelectric sensor pixels (sensitive to pressure changes and impinging infrared light), transistors for impedance matching and signal conditioning, and an electrochromic display. Applications may not only emerge in human-machine interfaces, but also in transient temperature or pressure sensing used in safety technology, in artificial skins, and in disposable sensor labels.
FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN
NASA Astrophysics Data System (ADS)
Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando
2014-06-01
The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and JQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries and pushing the development efforts to a higher abstraction layer based on a scripting language allow for significant reduction in maintenance of the code in multi-platform environments compared to those currently used in C++ visualization plugins. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.
Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF
NASA Astrophysics Data System (ADS)
Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)
A conceptual Guide-Dog Robot prototype to lead and to recognize a visually handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of the corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or nearby pedestrians. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms effectively enable the robot to detect the center of the corridor and the position of its follower correctly.
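One simple way to estimate the corridor centre from a single LRF scan is to compare the nearest wall clearance on the left with that on the right. The sketch below assumes a 180-degree scan at 1-degree resolution and a small dead-band around the forward beam; these are illustrative assumptions, not the algorithm proposed in the paper.

    # Minimal sketch of corridor-centre estimation from one LRF scan: compare
    # the closest wall on the left with the closest wall on the right.
    import math

    def corridor_center_offset(ranges, angle_min=-math.pi / 2, angle_inc=math.pi / 180):
        """Return the lateral offset [m] of the robot from the corridor midline.

        ranges: LRF distances [m], ordered from angle_min upward at angle_inc steps.
        A positive offset means the robot sits left of centre and should move right.
        """
        left_clearance = float("inf")
        right_clearance = float("inf")
        for i, r in enumerate(ranges):
            lateral = r * math.sin(angle_min + i * angle_inc)   # + = left of robot
            if lateral > 0.05:                                   # 0.05 m dead-band (assumed)
                left_clearance = min(left_clearance, lateral)
            elif lateral < -0.05:
                right_clearance = min(right_clearance, -lateral)
        return (right_clearance - left_clearance) / 2.0

    # Toy scan of a 2 m wide corridor with the robot 0.3 m left of the midline:
    # the left wall is 0.7 m away laterally, the right wall 1.3 m.
    angles = [math.radians(a) for a in range(-90, 91)]
    scan = [1.3 / abs(math.sin(t)) if t < 0 else
            0.7 / math.sin(t) if t > 0 else 30.0 for t in angles]
    print(round(corridor_center_offset(scan), 2))    # approximately 0.3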
Massachusetts Institute of Technology Consortium Agreement
1999-03-01
This is the third progress report of the M.I.T. Home Automation and Healthcare Consortium-Phase Two. It covers the majority of the new findings, concepts… research projects of home automation and healthcare, ranging from human modeling, patient monitoring, and diagnosis to new sensors and actuators, physical… aids, human-machine interfaces, and home automation infrastructure. This report contains several patentable concepts, algorithms, and designs.
Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.
ERIC Educational Resources Information Center
Acker, Stephen R.
1986-01-01
This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)
Measuring human performance on NASA's microgravity aircraft
NASA Technical Reports Server (NTRS)
Morris, Randy B.; Whitmore, Mihriban
1993-01-01
Measuring human performance in a microgravity environment will aid in identifying the design requirements, human capabilities, safety, and productivity of future astronauts. A preliminary understanding of microgravity effects on human performance can be achieved through evaluations conducted onboard NASA's KC-135 aircraft. These evaluations can be performed in relation to hardware performance, the human-hardware interface, and hardware integration. Measuring human performance in the KC-135 simulated environment will contribute to the efforts of optimizing the human-machine interfaces for future and existing space vehicles. However, there are limitations, such as the limited number of qualified subjects, unexpected hardware problems, and miscellaneous aircraft movements, which must be taken into consideration. Examples of these evaluations, the results, and their implications are discussed in the paper.
Human-machine interface for a VR-based medical imaging environment
NASA Astrophysics Data System (ADS)
Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans
1997-05-01
Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians can recognize spatial relationships and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, offering a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality display and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields and can be used in educational and training applications.
NASA Astrophysics Data System (ADS)
Fern, Lisa Carolynn
This dissertation examines the challenges inherent in designing and regulating to support human-automation interaction for new technologies that will be deployed into complex systems. A key question for new technologies with increasingly capable automation is how work will be accomplished by human and machine agents. This question has traditionally been framed as how functions should be allocated between humans and machines. Such framing misses the coordination and synchronization that is needed for the different human and machine roles in the system to accomplish their goals. Coordination and synchronization demands are driven by the underlying human-automation architecture of the new technology, which is typically not specified explicitly by designers. The human machine interface (HMI), which is intended to facilitate human-machine interaction and cooperation, typically is defined explicitly and therefore serves as a proxy for human-automation cooperation requirements with respect to technical standards for technologies. Unfortunately, mismatches between the HMI and the coordination and synchronization demands of the underlying human-automation architecture can lead to system breakdowns. A methodology is needed that both designers and regulators can utilize to evaluate the predicted performance of a new technology given potential human-automation architectures. Three experiments were conducted to inform the minimum HMI requirements for a detect and avoid (DAA) system for unmanned aircraft systems (UAS). The results of the experiments provided empirical input to specific minimum operational performance standards that UAS manufacturers will have to meet in order to operate UAS in the National Airspace System (NAS). These studies represent a success story for how to objectively and systematically evaluate prototype technologies as part of the process for developing regulatory requirements. They also provide an opportunity to reflect on the lessons learned in order to improve the methodology for defining technology requirements for regulators in the future. The biggest shortcoming of the presented research program was the absence of the explicit definition, generation, and analysis of potential human-automation architectures. Failure to execute this step in the research process resulted in less efficient evaluation of the candidate prototype technologies, in addition to a lack of exploration of different approaches to human-automation cooperation. Defining potential human-automation architectures a priori also allows regulators to develop scenarios that will stress the performance boundaries of the technology during the evaluation phase. The importance of adding this step of generating and evaluating candidate human-automation architectures prior to formal empirical evaluation is discussed. This document concludes with a look at both the importance of, and the challenges involved in, including the examination of human-automation coordination issues as part of the safety assurance activities for new technologies.
MARTI: man-machine animation real-time interface
NASA Astrophysics Data System (ADS)
Jones, Christian M.; Dlay, Satnam S.
1997-05-01
The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multimedia systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.
Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi
2015-03-01
This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like the P300. While eye movements and ERPs have each been used separately to implement assistive interfaces, which help patients with motor disabilities perform daily tasks, the proposed hybrid interface integrates them so that they complement each other. It can therefore provide better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements, including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components, including the P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out: one controls a multifunctional humanoid robot, and the other controls four mobile robots. In both experiments, the subjects can complete the tasks effectively using the proposed interface, and the best completion times are relatively short and very close to those achieved by manual operation.
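The EOG mode's threshold rule can be sketched as a cascade of amplitude tests; the channel conventions and all threshold values below are hypothetical, and the EEG/ERP mode of the interface is not shown.

    # Minimal sketch of a threshold rule over EOG amplitudes that separates
    # blink, wink, gaze shift, and frown. All thresholds and channel
    # conventions are hypothetical placeholders.

    def classify_eog(vertical_uv, horizontal_uv, left_uv, right_uv,
                     blink_th=300.0, gaze_th=150.0, frown_th=200.0):
        """Classify one EOG epoch from peak amplitudes in microvolts."""
        if vertical_uv > blink_th and abs(left_uv - right_uv) < 50.0:
            return "blink"                      # symmetric vertical deflection
        if max(left_uv, right_uv) > blink_th and abs(left_uv - right_uv) >= 50.0:
            return "wink"                       # strongly one-sided deflection
        if abs(horizontal_uv) > gaze_th:
            return "gaze-left" if horizontal_uv > 0 else "gaze-right"
        if vertical_uv < -frown_th:
            return "frown"                      # sustained downward shift
        return "rest"

    print(classify_eog(vertical_uv=350.0, horizontal_uv=10.0, left_uv=340.0, right_uv=330.0))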
Adapting human-machine interfaces to user performance.
Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A
2008-01-01
The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor located at the endpoint of the simulated arm.
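A minimal sketch of one adaptive scheme in this spirit is a linear map from the high-dimensional glove vector to the two joint angles, updated with an LMS rule so the mapping drifts toward the user's intent. This is a generic illustration, not the algorithm examined in the study; the number of sensor channels and the adaptation rate are assumptions.

    # Minimal sketch of an adaptive linear map from high-dimensional finger
    # coordinates to the two joint angles of a simulated planar arm, updated
    # with an LMS rule. Dimensions and the adaptation rate are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors = 19                      # assumed number of data-glove channels
    W = np.zeros((2, n_sensors))        # maps glove vector -> (shoulder, elbow) angles
    eta = 0.05                          # adaptation rate (assumed)

    true_map = rng.normal(size=(2, n_sensors))   # stands in for the user's intended mapping

    for trial in range(2000):
        glove = rng.normal(size=n_sensors)
        target_angles = true_map @ glove          # what the user is trying to produce
        produced_angles = W @ glove               # what the interface currently produces
        error = target_angles - produced_angles
        W += eta * np.outer(error, glove)         # LMS update nudges the map toward the user

    print("residual mapping error:", float(np.linalg.norm(true_map - W)))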
Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.
Lopes, Ana C; Nunes, Urbano; Vaz, Luís
2010-01-01
This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain-Computer Interfaces (BCIs), which provide a small set of sparsely issued commands, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) that is used to train users' ability to steer a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to share the information provided by the local motion planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.
Diverse applications of advanced man-telerobot interfaces
NASA Technical Reports Server (NTRS)
Mcaffee, Douglas A.
1991-01-01
Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential application wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.
Soft, Conformal Bioelectronics for a Wireless Human-Wheelchair Interface
Mishra, Saswat; Norton, James J. S.; Lee, Yongkuk; Lee, Dong Sup; Agee, Nicolas; Chen, Yanfei; Chun, Youngjae; Yeo, Woon-Hong
2017-01-01
There are more than 3 million people in the world whose mobility relies on wheelchairs. Recent advances in engineering technology enable more intuitive, easy-to-use rehabilitation systems. A human-machine interface that uses non-invasive, electrophysiological signals can allow systematic interaction between humans and devices; for example, eye movement-based wheelchair control. However, the existing machine-interface platforms are obtrusive, uncomfortable, and often cause skin irritation, as they require a metal electrode affixed to the skin with a gel and acrylic pad. Here, we introduce a bioelectronic system that makes dry, conformal contact with the skin. The mechanically comfortable sensor records high-fidelity electrooculograms, comparable to the conventional gel electrode. Quantitative signal analysis and infrared thermographs show the advantages of the soft biosensor for an ergonomic human-machine interface. A classification algorithm with an optimized set of features shows an accuracy of 94% across five eye movements. A Bluetooth-enabled system incorporating the soft bioelectronics demonstrates precise, hands-free control of a robotic wheelchair via electrooculograms. PMID:28152485
Kinematic design to improve ergonomics in human machine interaction.
Schiele, André; van der Helm, Frans C T
2006-12-01
This paper introduces a novel kinematic design paradigm for ergonomic human-machine interaction. Goals for optimal design are formulated generically and applied to the mechanical design of an upper-arm exoskeleton. A nine degree-of-freedom (DOF) model of human arm kinematics is presented and used to develop, test, and optimize the kinematic structure of an exoskeleton that interfaces with the human arm. The resulting device can interact with an unprecedented portion of the natural limb workspace, including motions of the shoulder girdle, shoulder, elbow, and wrist. The exoskeleton does not require alignment to the human joint axes, yet is able to actuate each DOF of our redundant limb unambiguously and without reaching into singularities. The device is comfortable to wear and does not create residual forces if misalignments exist. Implemented in a rehabilitation robot, the design features of the exoskeleton could enable longer-lasting training sessions, training of fully natural tasks such as activities of daily living, and shorter donning and doffing times. Results from inter-subject experiments with a prototype are presented that verify usability over the entire workspace of the human arm, including the shoulder and shoulder girdle.
Mastinu, Enzo; Doguet, Pascal; Botquin, Yohan; Hakansson, Bo; Ortiz-Catalan, Max
2017-08-01
Despite the technological progress in robotics achieved in the last decades, prosthetic limbs still lack functionality, reliability, and comfort. Recently, an implanted neuromusculoskeletal interface built upon osseointegration was developed and tested in humans, namely the Osseointegrated Human-Machine Gateway. Here, we present an embedded system to exploit the advantages of this technology. Our artificial limb controller allows for bioelectric signal acquisition, processing, decoding of motor intent, prosthetic control, and sensory feedback. It includes a neurostimulator to provide direct neural feedback based on sensory information. The system was validated using real-time task characterization, power consumption evaluation, and myoelectric pattern recognition performance. Functionality was proven in a first pilot patient from whom results of daily usage were obtained. The system was designed to be reliably used in activities of daily living, as well as to serve as a research platform to monitor prosthesis usage and training, machine-learning-based control algorithms, and neural stimulation paradigms.
Research in image management and access
NASA Technical Reports Server (NTRS)
Vondran, Raymond F.; Barron, Billy J.
1993-01-01
Presently, the problem of overall library system design has been compounded by the accretion of both function and structure to a basic framework of requirements. While more device power has led to increased functionality, opportunities for reducing system complexity at the user interface level have not always been pursued with equal zeal. The purpose of this book is therefore to set forth and examine these opportunities, within the general framework of human factors research on man-machine interfaces. Human factors may be viewed as a series of trade-off decisions among four polarized objectives: machine resources and user specifications; functionality and user requirements. In the past, a limiting factor was the availability of systems. However, in the last two years, over one hundred libraries supported by many different software configurations have been added to the Internet. This document includes a statistical analysis of human responses to five Internet library systems by key features, the development of an ideal online catalog system, and ideal online catalog systems for libraries and information centers.
Human facial neural activities and gesture recognition for machine-interfacing applications.
Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P
2011-01-01
The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands and can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy of over 90% for the chosen combinations demonstrated their suitability as command controllers.
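The feature-extraction and clustering steps can be sketched as a band-pass filter, per-window RMS features, and a plain fuzzy c-means loop; filter settings, window length, and the toy signals below are placeholders rather than the study's parameters.

    # Minimal sketch of the feature and classification steps: band-pass filter,
    # root-mean-square features per window, and a small fuzzy c-means loop.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def rms_features(emg, fs=1000.0, low=20.0, high=450.0):
        """emg: (n_windows, n_samples) array; returns one RMS value per window."""
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, emg, axis=1)
        return np.sqrt(np.mean(filtered ** 2, axis=1, keepdims=True))

    def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
        """Very small fuzzy c-means: returns (memberships, centroids)."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            Um = U ** m
            centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
            dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
            U = 1.0 / (dist ** (2 / (m - 1)))
            U /= U.sum(axis=1, keepdims=True)
        return U, centroids

    # Two toy "gestures": low-amplitude and high-amplitude random EMG windows.
    rng = np.random.default_rng(2)
    emg = np.vstack([rng.normal(0, 0.1, (10, 1000)), rng.normal(0, 1.0, (10, 1000))])
    memberships, _ = fuzzy_cmeans(rms_features(emg))
    print(memberships.argmax(axis=1))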
Future developments in brain-machine interface research.
Lebedev, Mikhail A; Tate, Andrew J; Hanson, Timothy L; Li, Zheng; O'Doherty, Joseph E; Winans, Jesse A; Ifft, Peter J; Zhuang, Katie Z; Fitzsimmons, Nathan A; Schwarz, David A; Fuller, Andrew M; An, Je Hi; Nicolelis, Miguel A L
2011-01-01
Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interface that enacts upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering, and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition.
Optical HMI with biomechanical energy harvesters integrated in textile supports
NASA Astrophysics Data System (ADS)
De Pasquale, G.; Kim, SG; De Pasquale, D.
2015-12-01
This paper reports the design, prototyping, and experimental validation of a human-machine interface (HMI), named GoldFinger, integrated into a glove with energy harvesting from finger motion. The device is intended for medical applications, design tools, the virtual reality field, and industrial applications where interaction with machines is restricted by safety procedures. The HMI prototype includes four piezoelectric transducers applied to the backside of the fingers at the PIP (proximal inter-phalangeal) joints, electric wires embedded in the fabric connecting the transducers, an aluminum case for the electronics, a wearable switch made of conductive fabric to turn the communication channel on and off, and an LED. The electronic circuit used to manage the power and to control the light emitter includes a diode bridge, leveling capacitors, a storage battery, and the conductive-fabric switch. Communication with the machine is managed by dedicated software, which includes the user interface, the optical tracking, and the continuous updating of the machine microcontroller. The energetic benefit of the harvester to battery lifetime is inversely proportional to the activation time of the optical emitter. In most applications, the optical port is active for 1 to 5% of the time, corresponding to a battery lifetime increase of between about 14% and 70%.
Integration Telegram Bot on E-Complaint Applications in College
NASA Astrophysics Data System (ADS)
Rosid, M. A.; Rachmadany, A.; Multazam, M. T.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.
2018-01-01
The Internet of Things (IoT) has influenced human life, with Internet connectivity extending from human-to-human to human-to-machine and machine-to-machine. This research field creates technologies and concepts that allow humans to communicate with machines for specific purposes. This research aimed to integrate the Telegram message-sending service with an e-complaint application at a college. With this application, users do not need to visit the URL of the e-complaint application; instead, they can simply submit a complaint via Telegram, and the complaint is then forwarded to the e-complaint application. Test results show that the e-complaint integration with the Telegram bot runs in accordance with the design. The Telegram bot makes it convenient for academic users to submit a complaint, and it offers interaction through the familiar interface people already use every day on their smartphones. Thus, with this system, the work unit concerned can make improvements immediately, since complaints are delivered rapidly.
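The forwarding idea can be sketched with the public Telegram Bot API over HTTPS: poll getUpdates, post each message to the e-complaint service, and acknowledge via sendMessage. The token, the e-complaint endpoint URL, and the payload fields are placeholders; the integration described in the paper may be structured differently.

    # Minimal sketch of forwarding Telegram messages to an e-complaint service.
    # The bot token and the e-complaint endpoint URL below are placeholders.
    import requests

    BOT_TOKEN = "123456:ABC-PLACEHOLDER"                            # hypothetical token
    API = f"https://api.telegram.org/bot{BOT_TOKEN}"
    ECOMPLAINT_URL = "https://example.edu/e-complaint/api/submit"   # hypothetical endpoint

    def forward_complaints(offset=None):
        """Fetch unread Telegram messages and forward each one as a complaint."""
        updates = requests.get(f"{API}/getUpdates",
                               params={"offset": offset, "timeout": 30}).json()
        for update in updates.get("result", []):
            message = update.get("message", {})
            text = message.get("text")
            if text:
                requests.post(ECOMPLAINT_URL, json={
                    "sender": message["from"].get("username", "anonymous"),
                    "complaint": text,
                })
                # Acknowledge receipt back to the user via the Bot API.
                requests.post(f"{API}/sendMessage", json={
                    "chat_id": message["chat"]["id"],
                    "text": "Your complaint has been forwarded.",
                })
            offset = update["update_id"] + 1
        return offset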
Automated visual imaging interface for the plant floor
NASA Astrophysics Data System (ADS)
Wutke, John R.
1991-03-01
The paper will provide an overview of the challenges facing a user of automated visual imaging ("AVI") machines and the philosophies that should be employed in designing them. As manufacturing tools and equipment become more sophisticated, it is increasingly difficult to maintain an efficient interaction between the operator and the machine. The typical user of an AVI machine in a production environment is technically unsophisticated. Also, operator and machine ergonomics are often a neglected or poorly addressed part of an efficient manufacturing process. This paper presents a number of man-machine interface design techniques and philosophies that effectively solve these problems.
People, planners and policy: is there an interface?
Susan Kopka
1979-01-01
This research attempts to isolate some of the dimensions of human evaluations/perceptions of the built environment through the use of an Audience Response Machine and a videotape of environmental scenes. The results suggest that there are commonalities in people's evaluations/perceptions and that this type of inquiry has prescriptive value for design/planning....
We can't explore space without it - Common human space needs for exploration spaceflight
NASA Technical Reports Server (NTRS)
Daues, K. R.; Erwin, H. O.
1992-01-01
An overview is conducted of physiological, psychological, and human-interface requirements for manned spaceflight programs to establish common criteria. Attention is given to the comfort levels relevant to human support in exploration mission spacecraft and planetary habitats, and three comfort levels (CLs) are established. The levels include: (1) CL-1 for basic crew life support; (2) CL-2 for enabling the nominal completion of mission science; and (3) CL-3 which provides for enhanced life support and user-friendly interface systems. CL-2 support systems can include systems for EVA, workstations, and activity centers for repairs and enhanced utilization of payload and human/machine integration. CL-3 supports can be useful for maintaining crew psychological and physiological health as well as the design of comfortable and earthlike surroundings. While all missions require CL-1 commonality, CL-2 commonality is required only for EVA systems, display nomenclature, and restraint designs.
Mental workload prediction based on attentional resource allocation and information processing.
Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin
2015-01-01
Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.
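A resource-allocation style workload index can be sketched as follows: each display element places demands on perceptual, cognitive, and motor resource pools, and the predicted workload is the total weighted demand relative to capacity. The demand table, weights, and capacity are hypothetical and are not the model proposed in the paper.

    # Minimal sketch of a resource-allocation style workload index. The demand
    # scores, resource weights, and capacity are hypothetical placeholders.

    RESOURCE_WEIGHTS = {"perceptual": 0.4, "cognitive": 0.4, "motor": 0.2}

    def predicted_workload(elements, capacity=10.0):
        """elements: list of dicts with per-resource demand scores (0-10)."""
        totals = {r: sum(e.get(r, 0.0) for e in elements) for r in RESOURCE_WEIGHTS}
        weighted = sum(RESOURCE_WEIGHTS[r] * totals[r] for r in RESOURCE_WEIGHTS)
        return weighted / capacity            # values above 1.0 suggest overload

    attitude_recovery_display = [
        {"perceptual": 6.0, "cognitive": 5.0, "motor": 2.0},   # attitude indicator
        {"perceptual": 3.0, "cognitive": 4.0, "motor": 1.0},   # altitude tape
        {"perceptual": 2.0, "cognitive": 2.0, "motor": 3.0},   # recovery command cue
    ]
    print(round(predicted_workload(attitude_recovery_display), 2))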
Human Machine Interface Programming and Testing
NASA Technical Reports Server (NTRS)
Foster, Thomas Garrison
2013-01-01
Human Machine Interface (HMI) Programming and Testing is about creating graphical displays to mimic mission-critical ground control systems in order to provide NASA engineers with the ability to monitor the health management of these systems in real time. The Health Management System (HMS) is an online interactive human machine interface system that monitors all Kennedy Ground Control Subsystem (KGCS) hardware in the field. The Health Management System is essential to NASA engineers because it allows remote control and monitoring of the health management systems of all the Programmable Logic Controllers (PLCs) and associated field devices. KGCS will have equipment installed at the launch pad, the Vehicle Assembly Building, and the Mobile Launcher, as well as the Multi-Purpose Processing Facility. I am designing graphical displays to monitor and control new modules that will be integrated into the HMS. The design of the display screen will closely mimic the appearance and functionality of the actual modules. There are many different field devices used to monitor health management, and each device has its own unique set of health-management-related data; therefore each display must also have its own unique way to display these data. Once the displays are created, the RSLogix5000 application is used to write software that maps all the required data read from the hardware to the graphical display. Once these data are mapped to their corresponding display items, the graphical display and hardware device will be connected through the same network in order to test all possible scenarios and types of data the graphical display was designed to receive. Test procedures will be written to thoroughly test the displays and ensure that they are working correctly before being deployed to the field. Additionally, the Kennedy Ground Control Subsystem's user manual will be updated to explain to the NASA engineers how to use the new module displays.
Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.
McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A
2017-06-28
To review the state of the art of robotic-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robotic-aided hand physiotherapy devices and brain-machine interface systems is outlined, focusing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the two major core designs (end-effector and exoskeleton), are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice and may help system designers to develop new devices.
Application of the user-centred design process according ISO 9241-210 in air traffic control.
König, Christina; Hofmann, Thomas; Bruder, Ralph
2012-01-01
Designing a usable human-machine interface for air traffic control is challenging and should follow approved methods. The ISO 9241-210 standard promises high usability of products by integrating future users and following an iterative process. This contribution describes the procedure and first results of the analysis and application of ISO 9241-210 to develop a planning tool for air traffic controllers.
Automatic Speech Recognition in Air Traffic Control: a Human Factors Perspective
NASA Technical Reports Server (NTRS)
Karlsson, Joakim
1990-01-01
The introduction of Automatic Speech Recognition (ASR) technology into the Air Traffic Control (ATC) system has the potential to improve overall safety and efficiency. However, because ASR technology is inherently a part of the man-machine interface between the user and the system, the human factors issues involved must be addressed. Here, some of the human factors problems are identified and related methods of investigation are presented. Research at M.I.T.'s Flight Transportation Laboratory is being conducted from a human factors perspective, focusing on intelligent parser design, presentation of feedback, error correction strategy design, and optimal choice of input modalities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.R. McJunkin; R.L. Boring; M.A. McQueen
Situational awareness in the operation and supervision of an industrial system means that the decision-making entity, whether machine or human, has the important data presented in a timely manner. An optimal presentation of information gives the operator the best opportunity to accurately interpret and react to anomalies caused by system degradation, failures, or adversaries. Anticipated problems are a matter for system design; however, the paper focuses on concepts for enhancing the situational awareness of a human operator when unanticipated or unaddressed event types occur. A methodology for human-machine interface development and a refinement strategy are described for a synthetic fuels plant model. A novel concept for adaptively highlighting the most interesting information in the system and a plan for testing the methodology are described.
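One way to realize the adaptive-highlighting concept is to rank live signals by how far they deviate from their recent baseline and surface the top few on the display; the z-score criterion and the tag names in this sketch are assumptions rather than details from the report.

    # Minimal sketch of adaptive highlighting: rank live process signals by how
    # far they deviate from their recent baseline. Tag names are placeholders.
    import statistics

    def most_interesting(history, latest, top_n=3):
        """history: {tag: [recent samples]}, latest: {tag: current value}."""
        scores = {}
        for tag, samples in history.items():
            mean = statistics.mean(samples)
            stdev = statistics.stdev(samples) or 1e-6     # guard against zero spread
            scores[tag] = abs(latest[tag] - mean) / stdev
        ranked = sorted(scores, key=scores.get, reverse=True)
        return ranked[:top_n]

    history = {"reactor_temp": [452, 451, 453, 452],
               "feed_rate": [10.1, 10.0, 10.2, 10.1],
               "syngas_pressure": [5.0, 5.1, 5.0, 5.1]}
    latest = {"reactor_temp": 470, "feed_rate": 10.1, "syngas_pressure": 5.1}
    print(most_interesting(history, latest, top_n=2))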
Flexible software architecture for user-interface and machine control in laboratory automation.
Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E
1998-10-01
We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
A Conceptual Framework for Predicting Error in Complex Human-Machine Environments
NASA Technical Reports Server (NTRS)
Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)
1998-01-01
We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.
NASA Technical Reports Server (NTRS)
Mount, Frances; Foley, Tico
1999-01-01
Human Factors Engineering, often referred to as Ergonomics, is a science that applies a detailed understanding of human characteristics, capabilities, and limitations to the design, evaluation, and operation of environments, tools, and systems for work and daily living. Human Factors is the investigation, design, and evaluation of equipment, techniques, procedures, facilities, and human interfaces, and encompasses all aspects of human activity from manual labor to mental processing and leisure time enjoyments. In spaceflight applications, human factors engineering seeks to: (1) ensure that a task can be accomplished, (2) maintain productivity during spaceflight, and (3) ensure the habitability of the pressurized living areas. DSO 904 served as a vehicle for the verification and elucidation of human factors principles and tools in the microgravity environment. Over six flights, twelve topics were investigated. This study documented the strengths and limitations of human operators in a complex, multifaceted, and unique environment. By focusing on the man-machine interface in space flight activities, it was determined which designs allow astronauts to be optimally productive during valuable and costly space flights. Among the most promising areas of inquiry were procedures, tools, habitat, environmental conditions, tasking, work load, flexibility, and individual control over work.
A Human Factors Perspective on Alarm System Research and Development 2000 to 2010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curt Braun; John Grimes; Eric Shaver
By definition, alarms serve to notify human operators of out-of-parameter conditions that could threaten equipment, the environment, product quality and, of course, human life. Given the complexities of industrial systems, human machine interfaces, and the human operator, the understanding of how alarms and humans can best work together to prevent disaster is continually developing. This review examines advances in alarm research and development from 2000 to 2010 and includes the writings of trade professionals, engineering and human factors researchers, and standards organizations with the goal of documenting advances in alarm system design, research, and implementation.
Man-Machine Interface (MMI) Requirements Definition and Design Guidelines
1981-02-01
be provided to interrogate the user to resolve any input ambiguities resulting from hardware limitations; see Smith and Goodwin, 1971. Reference: Smith, S. L. and Goodwin, N. C. Alphabetic data entry via the Touch-Tone pad: A comment. Human Factors, 1971, 13(2), 189-190. ... software designer. Reference: Miller, R. B. Response time in man-computer conversational transactions. In Proceedings of the AFIPS Fall Joint Computer Conference.
Chan, Chetwyn C H; Wong, Alex W K; Lee, Tatia M C; Chi, Iris
2009-03-01
The goal of this study was to enhance an existing automated teller machine (ATM) human-machine interface in order to accommodate the needs of older adults. Older adults were involved in the design and field test of the modified ATM prototype. The design of the user interface and functionality took the cognitive and physical abilities of older adults into account. The modified ATM system included only "cash withdrawal" and "transfer" functions based on the task demands and needs for services of older adults. One hundred and forty-one older adults (aged 60 or above) participated in the field test by operating modified or existing ATM systems. Those who operated the modified system were found to have significantly higher success rates than those who operated the existing system. The enhancement was most significant among older adults who had lower ATM-related abilities, a lower level of education, and no prior experience of using ATMs. This study demonstrates the usefulness of using a universal design and participatory approach to modify the existing ATM system for use by older adults. However, it also leads to a reduction in functionality of the enhanced system. Future studies should explore ways to develop a universal design ATM system which can satisfy the abilities and needs of all users in the entire population.
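A hedged illustration of how such success-rate comparisons between two interface groups are commonly analyzed; the counts below are invented, not the study's data:

```python
# Illustrative only: comparing success rates between two ATM interface groups
# with a chi-square test of independence. The counts are made up.
from scipy.stats import chi2_contingency

# rows: modified ATM, existing ATM; columns: success, failure (hypothetical counts)
table = [[60, 10],
         [45, 26]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```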
Reverse-micelle-induced porous pressure-sensitive rubber for wearable human-machine interfaces.
Jung, Sungmook; Kim, Ji Hoon; Kim, Jaemin; Choi, Suji; Lee, Jongsu; Park, Inhyuk; Hyeon, Taeghwan; Kim, Dae-Hyeong
2014-07-23
A novel method to produce porous pressure-sensitive rubber is developed. For the controlled size distribution of embedded micropores, solution-based procedures using reverse micelles are adopted. The piezosensitivity of the pressure sensitive rubber is significantly increased by introducing micropores. Using this method, wearable human-machine interfaces are fabricated, which can be applied to the remote control of a robot. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cousineau, Justine Emily; Bennion, Kevin S.; Chieduko, Victor; ...
2018-05-08
Cooling of electric machines is a key to increasing power density and improving reliability. This paper focuses on the design of a machine using a cooling jacket wrapped around the stator. The thermal contact resistance (TCR) between the electric machine stator and cooling jacket is a significant factor in overall performance and is not well characterized. This interface is typically an interference fit subject to compressive pressure exceeding 5 MPa. An experimental investigation of this interface was carried out using a thermal transmittance setup at pressures between 5 and 10 MPa. Furthermore, the results were compared to currently available models for contact resistance, and one model was adapted for prediction of TCR in future motor designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cousineau, Justine Emily; Bennion, Kevin S.; Chieduko, Victor
Cooling of electric machines is a key to increasing power density and improving reliability. This paper focuses on the design of a machine using a cooling jacket wrapped around the stator. The thermal contact resistance (TCR) between the electric machine stator and cooling jacket is a significant factor in overall performance and is not well characterized. This interface is typically an interference fit subject to compressive pressure exceeding 5 MPa. An experimental investigation of this interface was carried out using a thermal transmittance setup at pressures between 5 and 10 MPa. Furthermore, the results were compared to currently available models for contact resistance, and one model was adapted for prediction of TCR in future motor designs.
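For readers unfamiliar with the quantity, a minimal sketch of how area-specific TCR is typically reduced from a transmittance measurement (the numbers below are placeholders, not measured data):

```python
# Minimal sketch: thermal contact resistance as the interface temperature drop
# divided by the heat flux through the stack. Numbers are placeholders.
def tcr(delta_t_interface_K: float, heat_rate_W: float, area_m2: float) -> float:
    """Area-specific TCR in m^2*K/W."""
    heat_flux = heat_rate_W / area_m2          # W/m^2
    return delta_t_interface_K / heat_flux

# Example: 1.5 K drop across a 0.002 m^2 stator/jacket contact carrying 400 W
print(f"TCR = {tcr(1.5, 400.0, 0.002):.2e} m^2*K/W")
```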
Re-Design and Beta Testing of the Man-Machine Integration Design and Analysis System: MIDAS
NASA Technical Reports Server (NTRS)
Shively, R. Jay; Rutkowski, Michael (Technical Monitor)
1999-01-01
The Man-machine Design and Analysis System (MIDAS) is a human factors design and analysis system that combines human cognitive models with 3D CAD models and rapid prototyping and simulation techniques. MIDAS allows designers to ask 'what if' types of questions early in concept exploration and development prior to actual hardware development. The system outputs predictions of operator workload, situational awareness and system performance as well as graphical visualization of the cockpit designs interacting with models of the human in a mission scenario. Recently, MIDAS was re-designed to enhance functionality and usability. The goals driving the redesign include more efficient processing, GUI interface, advances in the memory structures, implementation of external vision models and audition. These changes were detailed in an earlier paper. Two Beta test sites with diverse applications have been chosen. One Beta test site is investigating the development of a new airframe and its interaction with the air traffic management system. The second Beta test effort will investigate 3D auditory cueing in conjunction with traditional visual cueing strategies including panel-mounted and heads-up displays. The progress and lessons learned on each of these projects will be discussed.
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple inputs-driven realistic facial animation system based on 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, thus can interact with humans through diverse interfaces. The combination of parameterized model and muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color value of input image and Gabor wavelet coefficient of illumination ratio image, are infused to reduce the influence of lighting and person dependence for the construction of online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech synchronized viseme synthesis without sacrificing any performance. The objective and subjective experiments show that the system is suitable for human-machine interaction.
Software and Human-Machine Interface Development for Environmental Controls Subsystem Support
NASA Technical Reports Server (NTRS)
Dobson, Matthew
2018-01-01
The Space Launch System (SLS) is the next premier launch vehicle for NASA. It is the next stage of manned space exploration from American soil, and will be the platform in which we push further beyond Earth orbit. In preparation of the SLS maiden voyage on Exploration Mission 1 (EM-1), the existing ground support architecture at Kennedy Space Center required significant overhaul and updating. A comprehensive upgrade of controls systems was necessary, including programmable logic controller software, as well as Launch Control Center (LCC) firing room and local launch pad displays for technician use. Environmental control acts as an integral component in these systems, being the foremost system for conditioning the pad and extremely sensitive launch vehicle until T-0. The Environmental Controls Subsystem (ECS) required testing and modification to meet the requirements of the designed system, as well as the human factors requirements of NASA software for Validation and Verification (V&V). This term saw significant strides in the progress and functionality of the human-machine interfaces used at the launch pad, and improved integration with the controller code.
Goal Tracking in a Natural Language Interface: Towards Achieving Adjustable Autonomy
1999-01-01
communication, we believe that human/machine interfaces that share some of the characteristics of human-human communication can be friendlier and easier ... natural means of communicating with a mobile robot. Although we are not claiming that communication with robotic agents must be patterned after human...
Tactual interfaces: The human perceiver
NASA Technical Reports Server (NTRS)
Srinivasan, M. A.
1991-01-01
Increasingly complex human-machine interactions, such as in teleoperation or in virtual environments, have necessitated the optimal use of the human tactual channel for information transfer. This need leads to a demand for a basic understanding of how the human tactual system works, so that the tactual interface between the human and the machine can receive the command signals from the human, as well as display the information to the human, in a manner that appears natural to the human. The tactual information consists of two components: (1) contact information which specifies the nature of direct contact with the object; and (2) kinesthetic information which refers to the position and motion of the limbs. This paper is mostly concerned with contact information.
ERIC Educational Resources Information Center
Kong, Siu Cheung; Yeung, Yau Yuen; Wu, Xian Qiu
2009-01-01
In order to facilitate senior primary school students in Hong Kong to engage in learning by observation of the phenomena related to electrical circuits, a design of a specific courseware system, of which the interactive human-machine interface was created with the use of an open-source software called the LabVNC, for conducting online…
Designing Microstructures/Structures for Desired Functional Material and Local Fields
2015-12-02
utilized to engineer multifunctional soft materials for multi-sensing, multi-actuating, human-machine interfaces. [3] Establish a theoretical framework ... model for surface elasticity, (ii) derived a new type of Maxwell stress in soft materials due to quantum mechanical-elasticity coupling and ... elucidated its ramification in engineering multifunctional soft materials, and (iii) demonstrated the possibility of concurrent magnetoelectricity and...
Using machine learning to emulate human hearing for predictive maintenance of equipment
NASA Astrophysics Data System (ADS)
Verma, Dinesh; Bent, Graham
2017-05-01
At the current time, interfaces between humans and machines use only a limited subset of senses that humans are capable of. The interaction among humans and computers can become much more intuitive and effective if we are able to use more senses, and create other modes of communicating between them. New machine learning technologies can make this type of interaction become a reality. In this paper, we present a framework for a holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for various tasks such as predictive maintenance.
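A hedged sketch of the general idea (not the authors' system): simple spectral features from machine sound feed a classifier that flags a fault; the signals here are synthetic, and real use would feed recorded audio:

```python
# Hedged sketch: extract simple spectral features from (synthetic) machine
# sound and train a classifier to distinguish healthy from faulty operation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SR = 8000  # sample rate (Hz)

def band_energies(signal, n_bands=16):
    spec = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log1p([b.sum() for b in bands])

def synth(fault: bool, rng):
    t = np.arange(SR) / SR
    x = 0.5 * np.sin(2 * np.pi * 120 * t) + 0.1 * rng.normal(size=t.size)
    if fault:
        x += 0.4 * np.sin(2 * np.pi * 900 * t)   # bearing-like whine
    return x

rng = np.random.default_rng(1)
X = [band_energies(synth(i % 2 == 1, rng)) for i in range(200)]
y = [i % 2 for i in range(200)]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```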
NASA Astrophysics Data System (ADS)
Whitestone, Jennifer J.; Geisen, Glen R.; McQuiston, Barbara K.
1997-03-01
Anthropometric surveys conducted by the military provide comprehensive human body measurement data that are human interface requirements for successful mission performance of weapon systems, including cockpits, protective equipment, and clothing. The application of human body dimensions to model humans and human-machine performance begins with engineering anthropometry. There are two critical elements to engineering anthropometry: data acquisition and data analysis. First, the human body is captured dimensionally with either traditional anthropometric tools, such as calipers and tape measures, or with advanced image acquisition systems, such as a laser scanner. Next, numerous statistical analysis tools, such as multivariate modeling and feature envelopes, are used to effectively transition these data for design and evaluation of equipment and work environments. Recently, Air Force technology transfer allowed researchers at the Computerized Anthropometric Research and Design (CARD) Laboratory at Wright-Patterson Air Force Base to work with the Dayton, Ohio area medical community in assessing the rate of wound healing and improving the fit of total contact burn masks. This paper describes the successful application of CARD Lab engineering anthropometry to two medically oriented human interface problems.
The desktop interface in intelligent tutoring systems
NASA Technical Reports Server (NTRS)
Baudendistel, Stephen; Hua, Grace
1987-01-01
The interface between an Intelligent Tutoring System (ITS) and the person being tutored is critical to the success of the learning process. If the interface to the ITS is confusing or non-supportive of the tutored domain, the effectiveness of the instruction will be diminished or lost entirely. Consequently, the interface to an ITS should be highly integrated with the domain to provide a robust and semantically rich learning environment. In building an ITS for ZetaLISP on a LISP Machine, a Desktop Interface was designed to support a programming learning environment. Using the bitmapped display, windows, and mouse, three desktops were designed to support self-study and tutoring of ZetaLISP. Through organization, well-defined boundaries, and domain support facilities, the desktops provide substantial flexibility and power for the student and facilitate learning ZetaLISP programming while screening the student from the complex LISP Machine environment. The student can concentrate on learning ZetaLISP programming and not on how to operate the interface or a LISP Machine.
Modifications to Optimize the AH-1Z Human Machine Interface
2013-04-18
accomplish this, a complete workload study of tasks performed by aircrew in the AH-1Z must be completed in the near future in order to understand ... design flaws and guide future design and integration of increased capability. Additionally, employment of material solutions to provide aircrew with the...
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
A Touch Sensing Technique Using the Effects of Extremely Low Frequency Fields on the Human Body
Elfekey, Hatem; Bastawrous, Hany Ayad; Okamoto, Shogo
2016-01-01
Touch sensing is a fundamental approach in human-to-machine interfaces, and is currently under widespread use. Many current applications use active touch sensing technologies. Passive touch sensing technologies are, however, more adequate to implement low power or energy harvesting touch sensing interfaces. This paper presents a passive touch sensing technique based on the fact that the human body is affected by the surrounding extremely low frequency (ELF) electromagnetic fields, such as those of AC power lines. These external ELF fields induce electric potentials on the human body—because human tissues exhibit some conductivity at these frequencies—resulting in what is called AC hum. We therefore propose a passive touch sensing system that detects this hum noise when a human touch occurs, thus distinguishing between touch and non-touch events. The effectiveness of the proposed technique is validated by designing and implementing a flexible touch sensing keyboard. PMID:27918416
A Touch Sensing Technique Using the Effects of Extremely Low Frequency Fields on the Human Body.
Elfekey, Hatem; Bastawrous, Hany Ayad; Okamoto, Shogo
2016-12-02
Touch sensing is a fundamental approach in human-to-machine interfaces, and is currently under widespread use. Many current applications use active touch sensing technologies. Passive touch sensing technologies are, however, more adequate to implement low power or energy harvesting touch sensing interfaces. This paper presents a passive touch sensing technique based on the fact that the human body is affected by the surrounding extremely low frequency (ELF) electromagnetic fields, such as those of AC power lines. These external ELF fields induce electric potentials on the human body-because human tissues exhibit some conductivity at these frequencies-resulting in what is called AC hum. We therefore propose a passive touch sensing system that detects this hum noise when a human touch occurs, thus distinguishing between touch and non-touch events. The effectiveness of the proposed technique is validated by designing and implementing a flexible touch sensing keyboard.
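A rough sketch of the hum-detection idea under assumed parameters (50 Hz mains, invented threshold), not the authors' circuit or firmware:

```python
# Rough sketch: estimate the amplitude of the mains-frequency component picked
# up by an electrode and declare a touch when it exceeds a threshold.
import numpy as np

FS = 1000.0      # sampling rate (Hz), illustrative
MAINS = 50.0     # local power-line frequency

def hum_amplitude(samples: np.ndarray) -> float:
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(samples)) / samples.size
    return spectrum[np.argmin(np.abs(freqs - MAINS))]

def is_touch(samples: np.ndarray, threshold: float = 0.05) -> bool:
    return hum_amplitude(samples) > threshold

t = np.arange(0, 0.2, 1.0 / FS)
no_touch = 0.01 * np.random.default_rng(0).normal(size=t.size)
touch = no_touch + 0.3 * np.sin(2 * np.pi * MAINS * t)   # induced AC hum
print(is_touch(no_touch), is_touch(touch))                # False True
```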
A Machine Learning System for Analyzing Human Tactics in a Game
NASA Astrophysics Data System (ADS)
Ito, Hirotaka; Tanaka, Toshimitsu; Sugie, Noboru
In order to realize advanced man-machine interfaces, it is desirable to develop a system that can infer the mental state of human users and then return appropriate responses. As a first step toward this goal, we developed a system capable of inferring human tactics in a simple game played between the system and a human. We present a machine learning system that plays a color expectation game. The system infers the tactics of the opponent and then decides its action based on the result. We employed a modified version of a classifier system similar to XCS to design the system. In addition, three methods are proposed to accelerate the learning rate: a masking method, an iterative method, and tactics templates. The results of computer experiments confirmed that the proposed methods effectively accelerate the machine learning. The masking method and the iterative method are effective for simple strategies that consider only part of the past information. However, the learning speed of these methods is insufficient for tactics that draw on a large amount of past information. In that case, the tactics template was able to complete the learning rapidly once the tactics were identified.
Qualitative CFD for Rapid Learning in Industrial and Academic Applications
NASA Astrophysics Data System (ADS)
Variano, Evan
2010-11-01
We present a set of tools that allow CFD to be used at an early stage in the design process. Users can rapidly explore the qualitative aspects of fluid flow using real-time simulations that react immediately to design changes. This can guide the design process by fostering an intuitive understanding of fluid dynamics at the prototyping stage. We use an extremely stable Navier-Stokes solver that is available commercially (and free to academic users) plus a custom user interface. The code is designed for the animation and gaming industry, and we exploit the powerful graphical display capabilities to develop a unique human-machine interface. This interface allows the user to efficiently explore the flow in 3D + real time, fostering an intuitive understanding of steady and unsteady flow patterns. There are obvious extensions to use in an academic setting. The trade-offs between accuracy and speed will be discussed in the context of CFD's role in design and education.
Designing Guiding Systems for Brain-Computer Interfaces
Kosmyna, Nataliya; Lécuyer, Anatole
2017-01-01
The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore the instructions and feedback for BCIs in order to provide a systematic taxonomy to describe BCI guiding systems. The purpose of our work is not only to give necessary clues to researchers and designers in Human–Computer Interaction (HCI), making the fusion between BCIs and HCI more fruitful, but also to better understand the possibilities BCIs can provide to them. PMID:28824400
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.
Crouser, R J; Chang, R
2012-12-01
Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
NASA Technical Reports Server (NTRS)
Wilber, George F.
2017-01-01
This Software Description Document (SDD) captures the design for developing the Flight Interval Management (FIM) system Configurable Graphics Display (CGD) software. Specifically this SDD describes aspects of the Boeing CGD software and the surrounding context and interfaces. It does not describe the Honeywell components of the CGD system. The SDD provides the system overview, architectural design, and detailed design with all the necessary information to implement the Boeing components of the CGD software and integrate them into the CGD subsystem within the larger FIM system. Overall system and CGD system-level requirements are derived from the CGD SRS (in turn derived from the Boeing System Requirements Design Document (SRDD)). Display and look-and-feel requirements are derived from Human Machine Interface (HMI) design documents and working group recommendations. This Boeing CGD SDD is required to support the upcoming Critical Design Review (CDR).
1992-05-01
especially true for friend-enemy or danger-safe designations. Dots, dashes, shapes, and video effects are recommended. Care must be taken to avoid visual ... 10.3 Screen Design - Format. 10.3.1.4 Use of Contrasting Features: Use contrasting features such as inverse video and color to call attention to ... captions. Do not use reverse video or highlighting for labels. 13.2.3.2 Formatting: For single fields, locate the caption to the left of the entry fields.
ODISEES: A New Paradigm in Data Access
NASA Astrophysics Data System (ADS)
Huffer, E.; Little, M. M.; Kusterer, J.
2013-12-01
As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
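A minimal, hypothetical illustration of the triple-store-plus-SPARQL pattern described above; the namespace, class names, and dataset names below are invented, not ODISEES' actual vocabulary:

```python
# Store parameter-level metadata as RDF triples and query them with SPARQL.
# All identifiers are invented for the example.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/es-ontology#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Var_AOD_550, RDF.type, EX.Variable))
g.add((EX.Var_AOD_550, EX.measures, EX.AerosolOpticalDepth))
g.add((EX.Var_AOD_550, EX.wavelength_nm, Literal(550)))
g.add((EX.Var_AOD_550, EX.occursIn, EX.Dataset_Example_L2))

q = """
SELECT ?var ?dataset WHERE {
  ?var a ex:Variable ;
       ex:measures ex:AerosolOpticalDepth ;
       ex:occursIn ?dataset .
}"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.var, row.dataset)
```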
Shahrbaf, Shirin; vanNoort, Richard; Mirzakouchaki, Behnam; Ghassemieh, Elaheh; Martin, Nicolas
2013-08-01
The effect of preparation design and the physical properties of the interface lute on the restored machined ceramic crown-tooth complex are poorly understood. The aim of this work was to determine, by means of three-dimensional finite element analysis (3D FEA) the effect of the tooth preparation design and the elastic modulus of the cement on the stress state of the cemented machined ceramic crown-tooth complex. The three-dimensional structure of human premolar teeth, restored with adhesively cemented machined ceramic crowns, was digitized with a micro-CT scanner. An accurate, high resolution, digital replica model of a restored tooth was created. Two preparation designs, with different occlusal morphologies, were modeled with cements of 3 different elastic moduli. Interactive medical image processing software (mimics and professional CAD modeling software) was used to create sophisticated digital models that included the supporting structures; periodontal ligament and alveolar bone. The generated models were imported into an FEA software program (hypermesh version 10.0, Altair Engineering Inc.) with all degrees of freedom constrained at the outer surface of the supporting cortical bone of the crown-tooth complex. Five different elastic moduli values were given to the adhesive cement interface 1.8GPa, 4GPa, 8GPa, 18.3GPa and 40GPa; the four lower values are representative of currently used cementing lutes and 40GPa is set as an extreme high value. The stress distribution under simulated applied loads was determined. The preparation design demonstrated an effect on the stress state of the restored tooth system. The cement elastic modulus affected the stress state in the cement and dentin structures but not in the crown, the pulp, the periodontal ligament or the cancellous and cortical bone. The results of this study suggest that both the choice of the preparation design and the cement elastic modulus can affect the stress state within the restored crown-tooth complex. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Distribution of man-machine controls in space teleoperation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1982-01-01
The distribution of control between man and machine is dependent on the tasks, available technology, human performance characteristics and control goals. This dependency has very specific projections on systems designed for teleoperation in space. This paper gives a brief outline of the space-related issues and presents the results of advanced teleoperator research and development at the Jet Propulsion Laboratory (JPL). The research and development work includes smart sensors, flexible computer controls and intelligent man-machine interface devices in the area of visual displays and kinesthetic man-machine coupling in remote control of manipulators. Some of the development results have been tested at the Johnson Space Center (JSC) using the simulated full-scale Shuttle Remote Manipulator System (RMS). The research and development work for advanced space teleoperation is far from complete and poses many interdisciplinary challenges.
The JPL telerobot operator control station. Part 1: Hardware
NASA Technical Reports Server (NTRS)
Kan, Edwin P.; Tower, John T.; Hunka, George W.; Vansant, Glenn J.
1989-01-01
The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The hardware design, system architecture, and its integration and interface with the rest of the Telerobot Demonstrator System are discussed.
Harper, J G; Fuller, R; Sweeney, D; Waldmann, T
1998-04-01
This paper describes ergonomic issues raised during a project to provide a replacement real-time bus route control system to a large public transport company. Task and system analyses highlighted several deficiencies in the original system architecture, the human-machine interfaces and the general approach to system management. The eventual live prototype replaced the existing original system for a trial evaluation period of several weeks. During this period a number of studies was conducted with the system users in order to measure any improvements the new system, with its ergonomic features, produced over the old. Importantly, the results confirmed that (a) general responsiveness and service quality were improved, and (b) users were more comfortable with the new design. We conclude with a number of caveats which we believe will be useful to any group addressing technology impact in a large organisation.
ERIC Educational Resources Information Center
Weller, Herman G.; Hartson, H. Rex
1992-01-01
Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…
Biosleeve Human-Machine Interface
NASA Technical Reports Server (NTRS)
Assad, Christopher (Inventor)
2016-01-01
Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight fitting sleeve worn on a user arm and including a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal processing, and communications electronics may be built into the sleeve and control data may be transmitted wirelessly to the controlled machine or robotic device.
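A hedged sketch of the kind of signal chain such a sleeve implies (windowed multi-channel EMG, per-channel RMS features, gesture classification mapped to machine commands); the data and gesture set are synthetic, not the patented design:

```python
# Hedged sketch: window multi-channel EMG, compute per-channel RMS features,
# and classify the gesture to emit a machine command. Data are synthetic.
import numpy as np
from sklearn.svm import SVC

N_CHANNELS, WIN = 8, 200   # 8 EMG channels, 200-sample windows

def rms_features(window: np.ndarray) -> np.ndarray:
    """window: (WIN, N_CHANNELS) -> one RMS value per channel."""
    return np.sqrt((window ** 2).mean(axis=0))

def synth_window(gesture: int, rng) -> np.ndarray:
    gains = np.ones(N_CHANNELS)
    gains[gesture] = 4.0                      # each gesture excites one channel more
    return rng.normal(size=(WIN, N_CHANNELS)) * gains

rng = np.random.default_rng(2)
X = [rms_features(synth_window(g, rng)) for g in range(4) for _ in range(30)]
y = [g for g in range(4) for _ in range(30)]
clf = SVC().fit(X, y)

commands = {0: "open gripper", 1: "close gripper", 2: "rotate left", 3: "rotate right"}
print(commands[int(clf.predict([rms_features(synth_window(2, rng))])[0])])
```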
The human role in space. Volume 3: Generalizations on human roles in space
NASA Technical Reports Server (NTRS)
1984-01-01
The human role in space was studied. The role and the degree of direct involvement of humans that will be required in future space missions, was investigated. Valid criteria for allocating functional activities between humans and machines were established. The technology requirements, ecnomics, and benefits of the human presence in space were examined. Factors which affect crew productivity include: internal architecture; crew support; crew activities; LVA systems; IVA/EVA interfaces; and remote systems management. The accomplished work is reported and the data and analyses from which the study results are derived are included. The results provide information and guidelines to enable NASA program managers and decision makers to establish, early in the design process, the most cost effective design approach for future space programs, through the optimal application of unique human skills and capabilities in space.
Human machine interface to manually drive rhombic like vehicles in remote handling operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopes, Pedro; Vale, Alberto; Ventura, Rodrigo
2015-07-01
In the thermonuclear experimental reactor ITER, a vehicle named CTS is designed to transport a container with activated components inside the buildings. In nominal operations, the CTS is autonomously guided under supervision. However, in some unexpected situations, such as in rescue and recovery operations, the autonomous mode must be overridden and the CTS must be remotely guided by an operator. The CTS is a rhombic-like vehicle, with two drivable and steerable wheels along its longitudinal axis, providing omni-directional capabilities. The rhombic kinematics correspond to four control variables, which are difficult to manage in manual mode operation. This paper proposes a Human Machine Interface (HMI) to remotely guide the vehicle in manual mode. The proposed solution is implemented using an HMI with an encoder connected to a micro-controller and an analog 2-axis joystick. Experimental results were obtained comparing the proposed solution with other controller devices in different scenarios and using a software platform that simulates the kinematics and dynamics of the vehicle. (authors)
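One hedged way to picture the mapping from operator inputs to the four wheel variables of such a rhombic layout; the geometry, sign conventions, and wheelbase below are assumptions, not the CTS implementation:

```python
# Hedged sketch: map body-frame velocity commands (vx, vy) and a yaw-rate
# command onto speed and steering angle for two wheels on the longitudinal axis.
import math

WHEELBASE = 2.0   # distance between front and rear wheels (m), illustrative

def wheel_commands(vx: float, vy: float, omega: float):
    """Return (speed, steer_angle) for the front and rear wheels."""
    half = WHEELBASE / 2.0
    cmds = []
    for x_wheel in (+half, -half):            # front, rear positions on the long axis
        wx, wy = vx, vy + omega * x_wheel     # rigid-body velocity at the wheel
        cmds.append((math.hypot(wx, wy), math.atan2(wy, wx)))
    return cmds

# Pure rotation in place: wheels steer to +/-90 degrees and drive in opposition
for speed, angle in wheel_commands(0.0, 0.0, 0.5):
    print(f"speed={speed:.2f} m/s, steer={math.degrees(angle):.1f} deg")
```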
21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...
21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...
21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...
21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...
21 CFR 870.4220 - Cardiopulmonary bypass heart-lung machine console.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Cardiopulmonary bypass heart-lung machine console... Cardiopulmonary bypass heart-lung machine console. (a) Identification. A cardiopulmonary bypass heart-lung machine... heart-lung machine. The console is designed to interface with the basic units used in a gas exchange...
Griffiths, Paul G; Gillespie, R Brent
2005-01-01
This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces.
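An illustrative sketch of a haptic-assist loop in the spirit of the paper; the gains, saturation limit, and error model are invented, not the study's controller:

```python
# Illustrative only: the automation applies a steering-wheel torque proportional
# to lane and heading error, which the driver feels and may override.
K_LANE, K_HEADING = 4.0, 2.0     # assist gains (N*m per m, N*m per rad), invented
MAX_ASSIST = 3.0                 # torque saturation (N*m)

def assist_torque(lane_error_m: float, heading_error_rad: float) -> float:
    t = -(K_LANE * lane_error_m + K_HEADING * heading_error_rad)
    return max(-MAX_ASSIST, min(MAX_ASSIST, t))

def net_wheel_torque(driver_torque: float, lane_error_m: float, heading_error_rad: float) -> float:
    return driver_torque + assist_torque(lane_error_m, heading_error_rad)

# Drifting 0.4 m right of the lane center with no driver input:
print(net_wheel_torque(0.0, 0.4, 0.05))   # negative torque steers back toward center
```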
TOPICAL REVIEW: Prosthetic interfaces with the visual system: biological issues
NASA Astrophysics Data System (ADS)
Cohen, Ethan D.
2007-06-01
The design of effective visual prostheses for the blind represents a challenge for biomedical engineers and neuroscientists. Significant progress has been made in the miniaturization and processing power of prosthesis electronics; however development lags in the design and construction of effective machine brain interfaces with visual system neurons. This review summarizes what has been learned about stimulating neurons in the human and primate retina, lateral geniculate nucleus and visual cortex. Each level of the visual system presents unique challenges for neural interface design. Blind patients with the retinal degenerative disease retinitis pigmentosa (RP) are a common population in clinical trials of visual prostheses. The visual performance abilities of normals and RP patients are compared. To generate pattern vision in blind patients, the visual prosthetic interface must effectively stimulate the retinotopically organized neurons in the central visual field to elicit patterned visual percepts. The development of more biologically compatible methods of stimulating visual system neurons is critical to the development of finer spatial percepts. Prosthesis electrode arrays need to adapt to different optimal stimulus locations, stimulus patterns, and patient disease states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roger Lew; Ronald L. Boring; Thomas A. Ulrich
Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs, but the set of tools for developing and designing HMIs is still in its infancy. Here we propose that Microsoft Windows Presentation Foundation (WPF) is well suited for many roles in the research and development of HMIs for process control.
Reprint of: Client interfaces to the Virtual Observatory Registry
NASA Astrophysics Data System (ADS)
Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.
2015-06-01
The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.
Client interfaces to the Virtual Observatory Registry
NASA Astrophysics Data System (ADS)
Demleitner, M.; Harrison, P.; Taylor, M.; Normand, J.
2015-04-01
The Virtual Observatory Registry is a distributed directory of information systems and other resources relevant to astronomy. To make it useful, facilities to query that directory must be provided to humans and machines alike. This article reviews the development and status of such facilities, also considering the lessons learnt from about a decade of experience with Registry interfaces. After a brief outline of the history of the standards development, it describes the use of Registry interfaces in some popular clients as well as dedicated UIs for interrogating the Registry. It continues with a thorough discussion of the design of the two most recent Registry interface standards, RegTAP on the one hand and a full-text-based interface on the other hand. The article finally lays out some of the less obvious conventions that emerged in the interaction between providers of registry records and Registry users as well as remaining challenges and current developments.
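A small sketch of interrogating the Registry through a RegTAP service with pyvo, assuming the pyvo package and a reachable RegTAP endpoint (the URL below is one commonly used service and may change); pyvo also exposes a higher-level pyvo.registry.search interface for keyword queries:

```python
# Hedged sketch: run a minimal ADQL query against the standard RegTAP table
# rr.resource. Endpoint URL is an assumption, not part of the article.
import pyvo

REGTAP_ENDPOINT = "http://reg.g-vo.org/tap"   # assumed reachable RegTAP service
tap = pyvo.dal.TAPService(REGTAP_ENDPOINT)

adql = "SELECT TOP 10 ivoid, res_title FROM rr.resource"
for row in tap.search(adql):
    print(row["ivoid"], row["res_title"])
```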
Man-machine interface requirements - advanced technology
NASA Technical Reports Server (NTRS)
Remington, R. W.; Wiener, E. L.
1984-01-01
Research issues and areas are identified where increased understanding of the human operator and the interaction between the operator and the avionics could lead to improvements in the performance of current and proposed helicopters. Both current and advanced helicopter systems and avionics are considered. Areas critical to man-machine interface requirements include: (1) artificial intelligence; (2) visual displays; (3) voice technology; (4) cockpit integration; and (5) pilot work loads and performance.
A Workshop on the Gathering of Information for Problem Formulation
1991-06-01
the AI specialists is to design "artificially intelligent" computer environments that tutor students in much the same way that a human teacher might ... tuning the interface between student and machine, and are using a technique of in situ development to tune the system toward realistic user needs. ... of transferability to new domains, while the latter suffers from extreme fragility: the inability to cope with any input not strictly conforming with...
Test-bench system for a borehole azimuthal acoustic reflection imaging logging tool
NASA Astrophysics Data System (ADS)
Liu, Xianping; Ju, Xiaodong; Qiao, Wenxiao; Lu, Junqiang; Men, Baiyong; Liu, Dong
2016-06-01
The borehole azimuthal acoustic reflection imaging logging tool (BAAR) is a new generation of imaging logging tool, which is able to investigate strata over a relatively large region around the borehole. The BAAR is designed based on the idea of modularization with a very complex structure, so it has become urgent for us to develop a dedicated test-bench system to debug each module of the BAAR. With the help of the test-bench system introduced in this paper, testing and calibration of the BAAR can be easily achieved. The test-bench system is designed based on the client/server model. The hardware system mainly consists of a host computer, an embedded controlling board, a bus interface board, a data acquisition board and a telemetry communication board. The host computer serves as the human machine interface and processes the uploaded data. The software running on the host computer is designed based on VC++. The embedded controlling board uses Advanced Reduced Instruction Set Machines 7 (ARM7) as the micro controller and communicates with the host computer via Ethernet. The software for the embedded controlling board is developed based on the operating system uClinux. The bus interface board, data acquisition board and telemetry communication board are designed based on a field programmable gate array (FPGA) and provide test interfaces for the logging tool. To examine the feasibility of the test-bench system, it was set up to perform a test on the BAAR. Analysis of the test results revealed an unqualified channel in the electronic receiving cabin. It is suggested that the test-bench system can be used to quickly determine the working condition of the BAAR's sub-modules, and it is of great significance in improving production efficiency and accelerating industrial production of the logging tool.
McGloughlin, T M; Murphy, D M; Kavanagh, A G
2004-01-01
Degradation of tibial inserts in vivo has been found to be multifactorial in nature, resulting in a complex interaction of many variables. A range of kinematic conditions occurs at the tibio-femoral interface, giving rise to various degrees of rolling and sliding at this interface. The movement of the tibio-femoral contact point may be an influential factor in the overall wear of ultra-high molecular weight polyethylene (UHMWPE) tibial components. As part of this study a three-station wear-test machine was designed and built to investigate the influence of rolling and sliding on the wear behaviour of specific design aspects of contemporary knee prostheses. Using the machine, it is possible to monitor the effect of various slide roll ratios on the performance of contemporary bearing designs from a geometrical and materials perspective.
Chang, Hochan; Kim, Sungwoong; Jin, Sumin; Lee, Seung-Woo; Yang, Gil-Tae; Lee, Ki-Young; Yi, Hyunjung
2018-01-10
Flexible piezoresistive sensors have huge potential for health monitoring, human-machine interfaces, prosthetic limbs, and intelligent robotics. A variety of nanomaterials and structural schemes have been proposed for realizing ultrasensitive flexible piezoresistive sensors. However, despite the success of recent efforts, high sensitivity within narrower pressure ranges and/or the challenging adhesion and stability issues still potentially limit their broad applications. Herein, we introduce a biomaterial-based scheme for the development of flexible pressure sensors that are ultrasensitive (resistance change by 5 orders) over a broad pressure range of 0.1-100 kPa, promptly responsive (20 ms), and yet highly stable. We show that employing biomaterial-incorporated conductive networks of single-walled carbon nanotubes as interfacial layers of contact-based resistive pressure sensors significantly enhances piezoresistive response via effective modulation of the interlayer resistance and provides stable interfaces for the pressure sensors. The developed flexible sensor is capable of real-time monitoring of wrist pulse waves under external medium pressure levels and providing pressure profiles applied by a thumb and a forefinger during object manipulation at a low voltage (1 V) and power consumption (<12 μW). This work provides a new insight into the material candidates and approaches for the development of wearable health-monitoring and human-machine interfaces.
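A hedged illustration of how readings from such a sensor might be linearized in software; the calibration points below are invented, chosen only to mimic a resistance swing of several orders of magnitude over roughly 0.1-100 kPa:

```python
# Hedged sketch: convert a measured resistance to pressure by interpolating an
# (invented) calibration curve in log-log space.
import numpy as np

cal_pressure_kpa = np.array([0.1, 1.0, 10.0, 100.0])
cal_resistance_ohm = np.array([1e7, 1e6, 1e5, 1e3])   # falls as pressure rises

def pressure_from_resistance(r_ohm: float) -> float:
    # np.interp needs increasing x, so interpolate against -log10(R)
    x = -np.log10(cal_resistance_ohm)
    y = np.log10(cal_pressure_kpa)
    return float(10 ** np.interp(-np.log10(r_ohm), x, y))

print(f"{pressure_from_resistance(3e5):.2f} kPa")   # a mid-range reading
```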
Advanced warfighter machine interface (Invited Paper)
NASA Astrophysics Data System (ADS)
Franks, Erin
2005-05-01
Future military crewmen may have more individual and shared tasks to complete throughout a mission as a result of smaller crew sizes and an increased number of technology interactions. To maintain reasonable workload levels, the Warfighter Machine Interface (WMI) must provide information in a consistent, logical manner, tailored to the environment in which the soldier will be completing their mission. This paper addresses design criteria for creating an advanced, multi-modal warfighter machine interface for on-the-move mounted operations. The Vetronics Technology Integration (VTI) WMI currently provides capabilities such as mission planning and rehearsal, voice and data communications, and manned/unmanned vehicle payload and mobility control. A history of the crewstation and more importantly, the WMI software will be provided with an overview of requirements and criteria used for completing the design. Multiple phases of field and laboratory testing provide the opportunity to evaluate the design and hardware in stationary and motion environments. Lessons learned related to system usability and user performance are presented with mitigation strategies to be tested in the future.
Flying Unmanned Aircraft: A Pilot's Perspective
NASA Technical Reports Server (NTRS)
Pestana, Mark E.
2011-01-01
The National Aeronautics and Space Administration (NASA) is pioneering various Unmanned Aircraft System (UAS) technologies and procedures which may enable routine access to the National Airspace System (NAS), with an aim for Next Gen NAS. These tools will aid in the development of technologies and integrated capabilities that will enable high value missions for science, security, and defense, and open the door to low-cost, extreme-duration, stratospheric flight. A century of aviation evolution has resulted in accepted standards and best practices in the design of human-machine interfaces, the displays and controls of which serve to optimize safe and efficient flight operations and situational awareness. The current proliferation of non-standard, aircraft-specific flight crew interfaces in UAS, coupled with the inherent limitations of operating UAS without in-situ sensory input and feedback (aural, visual, and vestibular cues), has increased the risk of mishaps associated with the design of the "cockpit." The examples of current non- or sub-standard design features range from "annoying" and "inefficient", to those that are difficult to manipulate or interpret in a timely manner, as well as to those that are "burdensome" and "unsafe." A concerted effort is required to establish best practices and standards for the human-machine interfaces, for the pilot as well as the air traffic controller. In addition, roles, responsibilities, knowledge, and skill sets are subject to redefining the terms, "pilot" and "air traffic controller", with respect to operating UAS, especially in the Next-Gen NAS. The knowledge, skill sets, training, and qualification standards for UAS operations must be established, and reflect the aircraft-specific human-machine interfaces and control methods. NASA's recent experiences flying its MQ-9 Ikhana in the NAS for extended duration have enabled both NASA and the FAA to realize the full potential for UAS, as well as understand the implications of current limitations. Ikhana is a Predator-B/Reaper UAS, built by General Atomics, Aeronautical Systems, Inc., and modified for research. Since 2007, the aircraft has been flown seasonally with a wing-mounted pod containing an infrared scanner, utilized to provide real-time wildfire geo-location data to various fire-fighting agencies in the western U.S. The multi-agency effort included an extensive process to obtain flight clearance from the FAA to operate under special provisions, given that UAS in general do not fully comply with current airspace regulations (e.g. sense-and-avoid requirements).
Interface Metaphors for Interactive Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Robert J.; Blaha, Leslie M.
To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user’s knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display as well as the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms. Or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.
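The idea of folding metaphor-derived, implicit feedback into a learner can be illustrated with a toy example. The sketch below is not from the report; it simply assumes that implicit labels inferred from metaphor interactions are noisier than explicit ones and therefore down-weights them (the feature dimensions, class rule, and the 0.3 weight are arbitrary).

```python
# Minimal sketch (not the paper's method): combine explicit labels with
# implicit, metaphor-derived feedback by down-weighting the implicit samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Explicit, user-provided labels (high confidence).
X_explicit = rng.normal(size=(20, 4))
y_explicit = (X_explicit[:, 0] > 0).astype(int)

# Implicit feedback inferred from interactions with an interface metaphor
# (e.g., items the user dragged into a "folder"); treated as noisier labels.
X_implicit = rng.normal(size=(100, 4))
y_implicit = (X_implicit[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)

X = np.vstack([X_explicit, X_implicit])
y = np.concatenate([y_explicit, y_implicit])
weights = np.concatenate([np.ones(20), 0.3 * np.ones(100)])  # trust implicit feedback less

clf = LogisticRegression().fit(X, y, sample_weight=weights)
print("training accuracy:", clf.score(X, y))
```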
A force-controllable macro-micro manipulator and its application to medical robots
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Uecker, Darrin R.; Wang, Yulun
1994-01-01
This paper describes an 8-degrees-of-freedom macro-micro robot. This robot is capable of performing tasks that require accurate force control, such as polishing, finishing, grinding, deburring, and cleaning. The design of the macro-micro mechanism, the control algorithms, and the hardware/software implementation of the algorithms are described in this paper. Initial experimental results are reported. In addition, this paper includes a discussion of medical surgery and the role that force control may play. We introduce a new class of robotic systems collectively called Robotic Enhancement Technology (RET). RET systems introduce the combination of robotic manipulation with human control to perform manipulation tasks beyond the individual capability of either human or machine. The RET class of robotic systems offers new challenges in mechanism design, control-law development, and man/machine interface design. We believe force-controllable mechanisms such as the macro-micro structure we have developed are a necessary part of RET. Work in progress in the area of RET systems and their application to minimally invasive surgery is presented, along with future research directions.
Miller, Christopher A; Parasuraman, Raja
2007-02-01
To develop a method enabling human-like, flexible supervisory control via delegation to automation. Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. We review problems with static and adaptive (as opposed to "adaptable") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a "level of automation" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described. On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's "playbook." Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.
Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo
2008-01-15
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
Rayport, Jeffrey F; Jaworski, Bernard J
2004-12-01
Most companies serve customers through a broad array of interfaces, from retail sales clerks to Web sites to voice-response telephone systems. But while the typical company has an impressive interface collection, it doesn't have an interface system. That is, the whole set does not add up to the sum of its parts in its ability to provide service and build customer relationships. Too many people and too many machines operating with insufficient coordination (and often at cross-purposes) mean rising complexity, costs, and customer dissatisfaction. In a world where companies compete not on what they sell but on how they sell it, turning that liability into an asset is what separates winners from losers. In this adaptation of their forthcoming book by the same title, Jeffrey Rayport and Bernard Jaworski explain how companies must reengineer their customer interface systems for optimal efficiency and effectiveness. Part of that transformation, they observe, will involve a steady encroachment by machine interfaces into areas that have long been the sacred province of humans. Managers now have opportunities unprecedented in the history of business to use machines, not just people, to credibly manage their interactions with customers. Because people and machines each have their strengths and weaknesses, company executives must identify what people do best, what machines do best, and how to deploy them separately and together. Front-office reengineering subjects every current and potential service interface to an analysis of opportunities for substitution (using machines instead of people), complementarity (using a mix of machines and people), and displacement (using networks to shift physical locations of people and machines), with the twin objectives of compressing costs and driving top-line growth through increased customer value.
Reflections on human error - Matters of life and death
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1989-01-01
The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.
Research interface on a programmable ultrasound scanner.
Shamdasani, Vijay; Bae, Unmin; Sikdar, Siddhartha; Yoo, Yang Mo; Karadayi, Kerem; Managuli, Ravi; Kim, Yongmin
2008-07-01
Commercial ultrasound machines in the past did not provide the ultrasound researchers access to raw ultrasound data. Lack of this ability has impeded evaluation and clinical testing of novel ultrasound algorithms and applications. Recently, we developed a flexible ultrasound back-end where all the processing for the conventional ultrasound modes, such as B, M, color flow and spectral Doppler, was performed in software. The back-end has been incorporated into a commercial ultrasound machine, the Hitachi HiVision 5500. The goal of this work is to develop an ultrasound research interface on the back-end for acquiring raw ultrasound data from the machine. The research interface has been designed as a software module on the ultrasound back-end. To increase the amount of raw ultrasound data that can be spooled in the limited memory available on the back-end, we have developed a method that can losslessly compress the ultrasound data in real time. The raw ultrasound data could be obtained in any conventional ultrasound mode, including duplex and triplex modes. Furthermore, use of the research interface does not decrease the frame rate or otherwise affect the clinical usability of the machine. The lossless compression of the ultrasound data in real time can increase the amount of data spooled by approximately 2.3 times, thus allowing more than 6s of raw ultrasound data to be acquired in all the modes. The interface has been used not only for early testing of new ideas with in vitro data from phantoms, but also for acquiring in vivo data for fine-tuning ultrasound applications and conducting clinical studies. We present several examples of how newer ultrasound applications, such as elastography, vibration imaging and 3D imaging, have benefited from this research interface. Since the research interface is entirely implemented in software, it can be deployed on existing HiVision 5500 ultrasound machines and may be easily upgraded in the future. The developed research interface can aid researchers in the rapid testing and clinical evaluation of new ultrasound algorithms and applications. Additionally, we believe that our approach would be applicable to designing research interfaces on other ultrasound machines.
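As an illustration of why real-time lossless compression roughly doubles the amount of data that can be spooled, the sketch below applies generic delta coding plus zlib entropy coding to synthetic 16-bit samples. The scanner's actual algorithm is not described here; the signal model and the resulting ratio are only indicative.

```python
# Illustrative sketch only: delta coding + general-purpose entropy coding on
# synthetic "RF-like" 16-bit samples, to show how a >2x lossless ratio can arise.
import numpy as np
import zlib

rng = np.random.default_rng(1)
t = np.arange(200_000)
# Slowly varying oscillation plus noise, loosely mimicking correlated samples.
rf = (2000 * np.sin(2 * np.pi * t / 500) + rng.normal(0, 50, t.size)).astype(np.int16)

raw_bytes = rf.tobytes()
delta = np.diff(rf, prepend=rf[:1])          # consecutive samples are correlated
compressed = zlib.compress(delta.astype(np.int16).tobytes(), level=6)

print("lossless compression ratio: %.2f" % (len(raw_bytes) / len(compressed)))
```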
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
Operability of Space Station Freedom's meteoroid/debris protection system
NASA Technical Reports Server (NTRS)
Kahl, Maggie S.; Stokes, Jack W.
1992-01-01
The design of Space Station Freedom's external structure must not only protect the spacecraft from the hazardous environment, but also must be compatible with the extra vehicular activity system for assembly and maintenance. The external procedures for module support are utility connections, external orbital replaceable unit changeout, and maintenance of the meteoroid/debris shields and multilayer insulation. All of these interfaces require proper man-machine engineering to be compatible with the extra vehicular activity and manipulator systems. This paper discusses design solutions, including those provided for human interface, to the Space Station Freedom meteoroid/debris protection system. The system advantages and current access capabilities are illustrated through analysis of its configuration over the Space Station Freedom resource nodes and common modules, with emphasis on the cylindrical sections and endcones.
My thoughts through a robot's eyes: an augmented reality-brain-machine interface.
Kansaku, Kenji; Hata, Naoki; Takano, Kouji
2010-02-01
A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.
Robots with a gentle touch: advances in assistive robotics and prosthetics.
Harwin, W S
1999-01-01
As healthcare costs rise and an aging population makes an increased demand on services, new techniques must be introduced to promote an individual's independence and provide these services. Robots can now be designed so they can alter their dynamic properties, changing from stiff to flaccid, or from giving no resistance to movement, to damping any large and sudden movements. This has strong implications in health care, in particular for rehabilitation, where a robot must work in conjunction with an individual, and might guide or assist a person's arm movements, or might be commanded to perform some set of autonomous actions. This paper presents the state-of-the-art of rehabilitation robots with examples from prosthetics, aids for daily living and physiotherapy. In all these situations there is the potential for the interaction to be non-passive, with a resulting potential for the human/machine/environment combination to become unstable. To understand this instability we must develop better models of the human motor system and fit these models with realistic parameters. This paper concludes with a discussion of this problem and overviews some human models that can be used to facilitate the design of the human/machine interfaces.
Graphical user interfaces for symbol-oriented database visualization and interaction
NASA Astrophysics Data System (ADS)
Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger
1997-04-01
In this approach, two basic services designed for the engineering of computer based systems are combined: a symbol-oriented man-machine-service and a high speed database-service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI-builder and a GUI-manager for the database service based upon the man-machine service using the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI-builder and GUI-manager, a user can build and operate their own graphical user interface for a given database according to their needs without writing a single line of code.
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Askew, Scott; Bluethmann, William; Diftler, Myron
2001-01-01
NASA began with the challenge of building a robot for doing assembly, maintenance, and diagnostic work in the 0-g environment of space. A robot with human form was then chosen as the best means of achieving that mission. The goal was not to build a machine to look like a human, but rather, to build a system that could do the same work. Robonaut could be inserted into the existing space environment, designed for a population of astronauts, and be able to perform many of the same tasks, with the same tools, and use the same interfaces. Rather than change that world to accommodate the robot, instead Robonaut accepts that it exists for humans, and must conform to it. While it would be easier to build a robot if all the interfaces could be changed, this is not the reality of space at present, where NASA has invested billions of dollars building spacecraft like the Space Shuttle and International Space Station. It is not possible to go back in time, and redesign those systems to accommodate full automation, but a robot can be built that adapts to them. This paper describes that design process, and the resultant solution, that NASA has named Robonaut.
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1993-01-01
This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.
Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface
De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara
2011-01-01
A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer’s skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574
NASA Astrophysics Data System (ADS)
Ramalingam, V. V.; Pandian, A.; Jaiswal, Abhijeet; Bhatia, Nikhar
2018-04-01
This paper presents a novel method for Emotion Detection based on Machine Learning, using various Support Vector Machine algorithms; the major emotions described are linked to WordNet for enhanced accuracy. The proposed approach plays a promising role in augmenting Artificial Intelligence in the near future and could be vital in the optimization of Human-Machine Interfaces.
Man-machine interfaces in health care
NASA Technical Reports Server (NTRS)
Charles, Steve; Williams, Roy E.
1991-01-01
The surgeon, like the pilot, is confronted with an ever increasing volume of voice, data, and image input. Simultaneously, the surgeon must control a rapidly growing number of devices to deliver care to the patient. The broad disciplines of man-machine interface design, systems integration, and teleoperation will play a role in the operating room of the future. The purpose of this communication is to report the incorporation of these design concepts into new surgical and laser delivery systems. A review of each general problem area and the systems under development to solve the problems are presented.
Functional near-infrared spectroscopy for adaptive human-computer interfaces
NASA Astrophysics Data System (ADS)
Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.
2015-03-01
We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
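A minimal sketch of the kind of machine-learning classification such a passive BCI relies on is given below, assuming made-up oxy/deoxy-hemoglobin features and a generic linear discriminant classifier rather than the authors' actual features or model.

```python
# Hedged sketch: generic workload classification on synthetic fNIRS-style features
# (mean and slope of HbO and HbR over a window); not the authors' pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 200
low_load  = rng.normal([0.2, 0.01, -0.1, -0.005], 0.1, size=(n, 4))
high_load = rng.normal([0.6, 0.03, -0.3, -0.015], 0.1, size=(n, 4))
X = np.vstack([low_load, high_load])
y = np.array([0] * n + [1] * n)   # 0 = low workload, 1 = high workload

clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

In a real-time system the same trained classifier would be applied to each incoming window of fNIRS features, and the interface adapted according to the predicted state.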
A study of speech interfaces for the vehicle environment.
DOT National Transportation Integrated Search
2013-05-01
Over the past few years, there has been a shift in automotive human machine interfaces from : visual-manual interactions (pushing buttons and rotating knobs) to speech interaction. In terms of : distraction, the industry views speech interaction as a...
An All-Silk-Derived Dual-Mode E-skin for Simultaneous Temperature-Pressure Detection.
Wang, Chunya; Xia, Kailun; Zhang, Mingchao; Jian, Muqiang; Zhang, Yingying
2017-11-15
Flexible skin-mimicking electronics are highly desired for the development of smart human-machine interfaces and wearable human-health monitors. Human skins are able to simultaneously detect different information, such as touch, friction, temperature, and humidity. However, due to the mutual interferences of sensors with different functions, it is still a big challenge to fabricate multifunctional electronic skins (E-skins). Herein, a combo temperature-pressure E-skin is reported through assembling a temperature sensor and a strain sensor, in both of which flexible and transparent silk-nanofiber-derived carbon fiber membranes (SilkCFM) are used as the active material. The temperature sensor presents a high temperature sensitivity of 0.81% per degree Celsius. The strain sensor shows an extremely high sensitivity with a gauge factor of ∼8350 at 50% strain, enabling the detection of subtle pressure stimuli that induce local strain. Importantly, the structure of the SilkCFM in each sensor is designed to be passive to other stimuli, enabling the integrated E-skin to precisely detect temperature and pressure at the same time. It is demonstrated that the E-skin can detect and distinguish exhaling, finger pressing, and spatial distributions of temperature and pressure, which cannot be realized using single-mode sensors. The remarkable performance of the silk-based combo temperature-pressure sensor, together with its green and scalable fabrication process, promises applications in human-machine interfaces and soft electronics.
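A quick sanity check of the quoted figures, assuming the usual gauge-factor definition GF = (ΔR/R0)/ε; the 8350 gauge factor and 0.81% per °C coefficient come from the abstract, and the rest is illustrative arithmetic.

```python
# Back-of-the-envelope check using the figures quoted in the abstract.
gauge_factor = 8350.0
strain = 0.50                                  # 50% strain
rel_resistance_change = gauge_factor * strain  # ΔR/R0 = GF * strain (usual definition)
print("ΔR/R0 at 50%% strain: %.0f (i.e. %.0f%%)"
      % (rel_resistance_change, 100 * rel_resistance_change))

temp_sensitivity = 0.0081                      # 0.81% per °C
delta_T = 10.0                                 # assumed temperature change, °C
print("ΔR/R0 for a 10 °C change: %.1f%%" % (100 * temp_sensitivity * delta_T))
```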
A strain-absorbing design for tissue-machine interfaces using a tunable adhesive gel.
Lee, Sungwon; Inoue, Yusuke; Kim, Dongmin; Reuveny, Amir; Kuribara, Kazunori; Yokota, Tomoyuki; Reeder, Jonathan; Sekino, Masaki; Sekitani, Tsuyoshi; Abe, Yusuke; Someya, Takao
2014-12-19
To measure electrophysiological signals from the human body, it is essential to establish stable, gentle and nonallergic contacts between the targeted biological tissue and the electrical probes. However, it is difficult to form a stable interface between the two for long periods, especially when the surface of the biological tissue is wet and/or the tissue exhibits motion. Here we resolve this difficulty by designing and fabricating smart, stress-absorbing electronic devices that can adhere to wet and complex tissue surfaces and allow for reliable, long-term measurements of vital signals. We demonstrate a multielectrode array, which can be attached to the surface of a rat heart, resulting in good conformal contact for more than 3 h. Furthermore, we demonstrate arrays of highly sensitive, stretchable strain sensors using a similar design. Ultra-flexible electronics with enhanced adhesion to tissue could enable future applications in chronic in vivo monitoring of biological signals.
Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy
2013-03-01
This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
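The core idea, maintaining a belief over candidate path symbols and updating it with each noisy binary input, can be sketched as below; the symbol set, codewords, and error rate are hypothetical and do not reproduce the authors' information-theoretic protocol.

```python
# Toy sketch of decoding noisy binary EEG decisions into one of several
# candidate curve primitives via a Bayesian posterior update.
import numpy as np

symbols = ["left-arc", "straight", "right-arc"]      # hypothetical curve primitives
prior = np.ones(len(symbols)) / len(symbols)

p_error = 0.25                                       # assumed EEG bit-error rate
# Each symbol is identified by a 3-bit codeword in this toy example.
codewords = {"left-arc": [0, 0, 1], "straight": [0, 1, 0], "right-arc": [1, 0, 0]}

received = [0, 1, 0]                                 # noisy bits decoded from EEG
posterior = prior.copy()
for t, bit in enumerate(received):
    likelihood = np.array([(1 - p_error) if codewords[s][t] == bit else p_error
                           for s in symbols])
    posterior = posterior * likelihood
    posterior /= posterior.sum()

print(dict(zip(symbols, np.round(posterior, 3))))
```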
Exploiting co-adaptation for the design of symbiotic neuroprosthetic assistants.
Sanchez, Justin C; Mahmoudi, Babak; DiGiovanna, Jack; Principe, Jose C
2009-04-01
The success of brain-machine interfaces (BMI) is enabled by the remarkable ability of the brain to incorporate the artificial neuroprosthetic 'tool' into its own cognitive space and use it as an extension of the user's body. Unlike other tools, neuroprosthetics create a shared space that seamlessly spans the user's internal goal representation of the world and the external physical environment enabling a much deeper human-tool symbiosis. A key factor in the transformation of 'simple tools' into 'intelligent tools' is the concept of co-adaptation where the tool becomes functionally involved in the extraction and definition of the user's goals. Recent advancements in the neuroscience and engineering of neuroprosthetics are providing a blueprint for how new co-adaptive designs based on reinforcement learning change the nature of a user's ability to accomplish tasks that were not possible using conventional methodologies. By designing adaptive controls and artificial intelligence into the neural interface, tools can become active assistants in goal-directed behavior and further enhance human performance in particular for the disabled population. This paper presents recent advances in computational and neural systems supporting the development of symbiotic neuroprosthetic assistants.
Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele
2016-01-01
This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies and in particular on robotic exoskeletons as tools for increasing freedom to move and for performing Activities of Daily Living (ADLs). This would result in a deep improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal driven human-machine interfaces can increase the degree of participation of the patient in robotic training, allowing the user's intention to be recognized and assisting the patient in rehabilitation tasks, thus representing a fundamental aspect to elicit motor learning. PMID:28484314
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donald D Dudenhoeffer; Bruce P Hallbert
Instrumentation, Controls, and Human-Machine Interface (ICHMI) technologies are essential to ensuring delivery and effective operation of optimized advanced Generation IV (Gen IV) nuclear energy systems. In 1996, the Watts Bar I nuclear power plant in Tennessee was the last U.S. nuclear power plant to go on line. It was, in fact, built based on pre-1990 technology. Since this last U.S. nuclear power plant was designed, there have been major advances in the field of ICHMI systems. Computer technology employed in other industries has advanced dramatically, and computing systems are now replaced every few years as they become functionally obsolete. Functional obsolescence occurs when newer, more functional technology replaces or supersedes an existing technology, even though an existing technology may well be in working order. Although ICHMI architectures are comprised of much of the same technology, they have not been updated nearly as often in the nuclear power industry. For example, some newer Personal Digital Assistants (PDAs) or handheld computers may, in fact, have more functionality than the 1996 computer control system at the Watts Bar I plant. This illustrates the need to transition and upgrade current nuclear power plant ICHMI technologies.
NASA Technical Reports Server (NTRS)
Connolly, Janis H.; Arch, M.; Elfezouaty, Eileen Schultz; Novak, Jennifer Blume; Bond, Robert L. (Technical Monitor)
1999-01-01
Design and Human Engineering (HE) processes strive to ensure that the human-machine interface is designed for optimal performance throughout the system life cycle. Each component can be tested and assessed independently to assure optimal performance, but it is not until full integration that the system and the inherent interactions between the system components can be assessed as a whole. HE processes (which are defining/applying requirements for human interaction with missions/systems) are included in space flight activities, but also need to be included in ground activities and specifically, ground facility testbeds such as Bio-Plex. A unique aspect of the Bio-Plex Facility is the integral issue of Habitability which includes qualities of the environment that allow humans to work and live. HE is a process by which Habitability and system performance can be assessed.
Development and Implementation of a Simplified Tool Measuring System
NASA Astrophysics Data System (ADS)
Chen, Jenn-Yih; Lee, Bean-Yin; Lee, Kuang-Chyi; Chen, Zhao-Kai
2010-01-01
This paper presents a simplified system for measuring geometric profiles of end mills. Firstly, a CCD camera was used to capture images of cutting tools. Then, an image acquisition card with an encoding function was adopted to convert the image source to a USB interface of a PC, so the image could be shown on a monitor. In addition, two linear scales were mounted on the X-Y table for positioning and measuring purposes. The signals of the linear scales were transmitted to a 4-axis quadrature-encoder, 4-channel counter card for position monitoring. C++ Builder was utilized to design the user-friendly human-machine interface of the tool measuring system. There is a cross line on the image of the interface that shows a coordinate for position measurement. Finally, a well-known tool measuring and inspection machine was employed as the measuring standard. This study compares the measuring results obtained with that machine and with the proposed system. Experimental results show that the percentage of measuring error is acceptable for some geometric parameters of the square or ball nose end mills. Therefore, the results demonstrate the effectiveness of the presented approach.
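A minimal sketch of the position read-out step is shown below, assuming a hypothetical 1 µm-per-count scale resolution and a made-up reference value from the inspection machine; the point is only to show how counter readings convert to millimetres and how the percentage error would be computed.

```python
# Hypothetical numbers for illustration: convert quadrature-counter readings from
# the linear scales into millimetres and compare against the reference machine.
SCALE_RESOLUTION_MM = 0.001      # assumed 1 µm per count after quadrature decoding

def counts_to_mm(counts: int) -> float:
    return counts * SCALE_RESOLUTION_MM

reference_mm = 12.700            # assumed value from the tool measuring/inspection machine
measured_mm = counts_to_mm(12693)

error_percent = abs(measured_mm - reference_mm) / reference_mm * 100
print(f"measured = {measured_mm:.3f} mm, error = {error_percent:.3f} %")
```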
Designing, Fabrication and Controlling Of Multipurpose3-DOF Robotic Arm
NASA Astrophysics Data System (ADS)
Nabeel, Hafiz Muhammad; Azher, Anum; Usman Ali, Syed M.; Wahab Mughal, Abdul
2013-12-01
In the present work, we have successfully designed and developed a 3-DOF articulated Robotic Arm capable of performing typical industrial tasks such as painting or spraying, assembling, and handling automobile parts, in resemblance to a human arm. The mechanical assembly is designed in SOLIDWORKS, and aluminum grade 6061-T6 is used for its fabrication in order to reduce the structural weight. We have applied inverse kinematics to determine the joint angles; the equations are fed into an efficient ATMEGA16 microcontroller, which performs all the calculations to determine the joint angles from the given coordinates and actuates the joints through motorized control. Good accuracy was obtained with quadrature optical encoders installed in each joint to achieve the desired position, and a LabVIEW-based GUI was designed to provide the human-machine interface.
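For illustration, a closed-form inverse-kinematics sketch for a base-shoulder-elbow arm is shown below; the link lengths, target point, and elbow-configuration choice are assumptions, not the parameters of the arm described above.

```python
# Illustrative inverse kinematics for a 3-DOF arm (base yaw + shoulder + elbow),
# using the law of cosines; link lengths and target are made-up values.
import math

L1, L2 = 0.20, 0.15              # link lengths in metres (assumed)

def ik_3dof(x, y, z):
    base = math.atan2(y, x)                       # joint 1: rotation about vertical axis
    r = math.hypot(x, y)                          # horizontal reach
    d = math.hypot(r, z)                          # distance from shoulder to target
    if d > L1 + L2:
        raise ValueError("target out of reach")
    cos_elbow = (d**2 - L1**2 - L2**2) / (2 * L1 * L2)
    elbow = math.acos(cos_elbow)                  # joint 3 (one of two solutions)
    shoulder = math.atan2(z, r) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return base, shoulder, elbow

angles = ik_3dof(0.20, 0.10, 0.05)
print([round(math.degrees(a), 1) for a in angles])
```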
Vibrotactile display for mobile applications based on dielectric elastomer stack actuators
NASA Astrophysics Data System (ADS)
Matysek, Marc; Lotz, Peter; Flittner, Klaus; Schlaak, Helmut F.
2010-04-01
Dielectric elastomer stack actuators (DESA) offer the possibility to build actuator arrays at very high density. The driving voltage is set by the film thickness, which ranges from 80 μm down to 5 μm, at a driving field strength of 30 V/μm. In this paper we present the development of a vibrotactile display based on multilayer technology. The display is used to present several operating conditions of a machine in the form of haptic information to a human finger. As an example, the design of an mp3-player interface is introduced. To build an intuitive and user-friendly interface, several aspects of human haptic perception have to be considered. Using the results of preliminary user tests, the interface is designed and an appropriate actuator layout is derived. Controlling these actuators is important because there are many possibilities to present different information, e.g. by varying the driving parameters. A built demonstrator is used to verify the concept: a high recognition rate of more than 90% validates the concept. A characterization of mechanical and electrical parameters proves the suitability of dielectric elastomer stack actuators for use in mobile applications.
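The relationship between layer thickness and drive voltage quoted above follows directly from V = E * t; a small check with the stated 30 V/μm field strength (the intermediate 20 μm value is added only for comparison):

```python
# Drive voltage required to reach the stated field strength for a few layer thicknesses.
FIELD_V_PER_UM = 30.0            # driving field strength from the abstract, V/µm

for thickness_um in (80.0, 20.0, 5.0):
    drive_voltage = FIELD_V_PER_UM * thickness_um   # V = E * t
    print(f"{thickness_um:5.1f} µm film  ->  {drive_voltage:6.0f} V")
```

This is why thinner layers are attractive for mobile applications: the 5 μm film needs only 150 V rather than the 2400 V required at 80 μm.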
NASA Astrophysics Data System (ADS)
Hong, Haibo; Yin, Yuehong; Chen, Xing
2016-11-01
Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products is still not fully accomplished, partly because of the inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, then corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. The ontology searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.
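The mapping step of a similarity-based ontology integration can be illustrated with a toy token-overlap measure; the concept labels, Jaccard similarity, and threshold below are placeholders rather than the similarity calculation used in the article.

```python
# Toy sketch of similarity-based ontology mapping: match concepts across two
# ontologies when their label token sets overlap enough.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

design_ontology  = ["grinding wheel spindle", "hydrostatic guideway", "machine bed"]
process_ontology = ["spindle unit", "guideway", "bed casting", "coolant system"]

THRESHOLD = 0.2   # assumed mapping threshold
for c1 in design_ontology:
    best = max(process_ontology, key=lambda c2: jaccard(c1, c2))
    score = jaccard(c1, best)
    if score >= THRESHOLD:
        print(f"map '{c1}'  <->  '{best}'   (similarity {score:.2f})")
```

Concept pairs that clear the threshold would then be candidates for merging into the shared ontology.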
Methodology for creating dedicated machine and algorithm on sunflower counting
NASA Astrophysics Data System (ADS)
Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand
2007-09-01
In order to sell grain lots in European countries, seed industries need a government certification. This certification requires purity testing, seed counting in order to quantify specified seed species and other impurities in lots, and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. Presently these different analyses are still performed manually by skilled operators. Previous works have already shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and thus have presented several identification algorithms. Until now, most of the works in this domain have been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine aimed at identifying and sorting seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor) and a PC bearing the GUI (Human Machine Interface) of the system. Its operation relies on the stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach, in order to simulate the whole vision chain (image acquisition, feature extraction, identification) in the Matlab environment. In order to port this processing to dedicated hardware, all these algorithms were developed without the use of the Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented on a dedicated hardware machine for seed counting.
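A "distance between groups" identification rule of the kind mentioned above can be sketched as a nearest-centroid classifier; the two classes, three features, and all numeric values below are synthetic stand-ins for the ~110 real visual features.

```python
# Sketch of a nearest-centroid ("distance between groups") classifier:
# assign a seed's feature vector to the group whose centroid is closest.
import numpy as np

rng = np.random.default_rng(3)
# Synthetic training features (area_mm2, elongation, mean_hue) per group.
sunflower = rng.normal([55.0, 1.8, 0.08], [5.0, 0.2, 0.02], size=(50, 3))
impurity  = rng.normal([20.0, 1.2, 0.30], [8.0, 0.3, 0.05], size=(50, 3))

centroids = {"sunflower": sunflower.mean(axis=0), "impurity": impurity.mean(axis=0)}

def classify(features):
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

unknown_seed = np.array([52.0, 1.7, 0.10])
print(classify(unknown_seed))   # -> 'sunflower'
```

A rule of this form maps naturally onto FPGA/DSP hardware, since it reduces to a handful of multiply-accumulate operations per candidate group.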
Avatars and virtual agents – relationship interfaces for the elderly
2017-01-01
In the Digital Era, the authors witness a change in the relationship between the patient and the care-giver or Health Maintenance Organizations providing the health services. Another fact is the use of various technologies to increase the effectiveness and quality of health services across all primary and secondary users. These technologies range from telemedicine systems and decision-making tools to online self-service applications and virtual agents, all providing information and assistance. The common thread between all these digital implementations is that they all require human-machine interfaces. These interfaces must be interactive, user friendly, and inviting, to create user involvement and cooperation incentives. The challenge is to design interfaces that best fit the target users and enable smooth interaction, especially for elderly users. Avatars and Virtual Agents are one of the interfaces used for both home care monitoring and companionship. They are also inherently multimodal in nature and allow an intimate relation between the elderly users and the Avatar. This study discusses the need for and nature of these relationship models and the challenges of designing for the elderly. The study proposes key features for the design and evaluation of assistive applications using Avatars and Virtual Agents for elderly users. PMID:28706725
Concurrent Image Processing Executive (CIPE)
NASA Technical Reports Server (NTRS)
Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1988-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.
Force reflecting hand controller
NASA Technical Reports Server (NTRS)
Mcaffee, Douglas A. (Inventor); Snow, Edward R. (Inventor); Townsend, William T. (Inventor)
1993-01-01
A universal input device for interfacing a human operator with a slave machine such as a robot or the like includes a plurality of serially connected mechanical links extending from a base. A handgrip is connected to the mechanical links distal from the base such that a human operator may grasp the handgrip and control the position thereof relative to the base through the mechanical links. A plurality of rotary joints is arranged to connect the mechanical links together to provide at least three translational degrees of freedom and at least three rotational degrees of freedom of motion of the handgrip relative to the base. A cable and pulley assembly for each joint is connected to a corresponding motor for transmitting forces from the slave machine to the handgrip to provide kinesthetic feedback to the operator and for producing control signals that may be transmitted from the handgrip to the slave machine. The device gives excellent kinesthetic feedback, high-fidelity force/torque feedback, a kinematically simple structure, mechanically decoupled motion in all six degrees of freedom, and zero backlash. The device also has a much larger work envelope, greater stiffness and responsiveness, smaller stowage volume, and better overlap of the human operator's range of motion than previous designs.
Arnold, Andrew J; Razavieh, Ali; Nasr, Joseph R; Schulman, Daniel S; Eichfeld, Chad M; Das, Saptarshi
2017-03-28
Neurotransmitter release in chemical synapses is fundamental to diverse brain functions such as motor action, learning, cognition, emotion, perception, and consciousness. Moreover, improper functioning or abnormal release of neurotransmitter is associated with numerous neurological disorders such as epilepsy, sclerosis, schizophrenia, Alzheimer's disease, and Parkinson's disease. We have utilized hysteresis engineering in a back-gated MoS2 field effect transistor (FET) in order to mimic such neurotransmitter release dynamics in chemical synapses. All three essential features, i.e., quantal, stochastic, and excitatory or inhibitory nature of neurotransmitter release, were accurately captured in our experimental demonstration. We also mimicked an important phenomenon called long-term potentiation (LTP), which forms the basis of human memory. Finally, we demonstrated how to engineer the LTP time by operating the MoS2 FET in different regimes. Our findings could provide a critical component toward the design of next-generation smart and intelligent human-like machines and human-machine interfaces.
Application of the SCADA system in wastewater treatment plants.
Dieu, B
2001-01-01
The implementation of the SCADA system has a positive impact on the operations, maintenance, process improvement and savings for the City of Houston's Wastewater Operations branch. This paper will discuss the system's evolution, the external/internal architecture, and the human-machine-interface graphical design. Finally, it will demonstrate the system's successes in monitoring the City's sewage and sludge collection/distribution systems, wet-weather facilities and wastewater treatment plants, complying with the USEPA requirements on the discharge, and effectively reducing the operations and maintenance costs.
Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan
2010-01-01
A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and low weight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beam forming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when faced with constraints on computational resources.
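The beam-forming stage of such a multichannel front end can be illustrated with a basic delay-and-sum sketch; the array geometry, look direction, and noise levels are assumptions, and the flight system's processing is more elaborate than this.

```python
# Minimal delay-and-sum beamformer sketch on simulated 4-channel data.
import numpy as np

fs = 16_000                                    # sample rate, Hz
c = 343.0                                      # speed of sound, m/s
mic_x = np.array([0.00, 0.03, 0.06, 0.09])     # assumed 4-mic linear array positions, m
theta = np.deg2rad(30)                         # assumed look direction

t = np.arange(0, 0.1, 1 / fs)
clean = np.sin(2 * np.pi * 440 * t)            # stand-in for the speech signal
rng = np.random.default_rng(4)

# Simulate each channel: the source arrives with a geometric delay plus noise.
delays = mic_x * np.sin(theta) / c             # seconds
channels = [np.interp(t - d, t, clean, left=0.0, right=0.0) +
            rng.normal(0, 0.5, t.size) for d in delays]

# Delay-and-sum: re-align each channel to the look direction and average.
aligned = [np.interp(t + d, t, ch, left=0.0, right=0.0)
           for ch, d in zip(channels, delays)]
output = np.mean(aligned, axis=0)

def snr_db(sig, ref):
    return 10 * np.log10(np.sum(ref**2) / np.sum((sig - ref)**2))

print("single mic SNR: %.1f dB, beamformed SNR: %.1f dB"
      % (snr_db(channels[0], clean), snr_db(output, clean)))
```

Averaging the re-aligned channels improves the signal-to-noise ratio by roughly 6 dB for four microphones with independent noise, which is the effect the single-channel noise reduction and recognizer then build on.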
The SmartHand transradial prosthesis
2011-01-01
Background Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past; nonetheless, most of them would be inappropriate if/when a large bandwidth human-machine interface for control and perception became available, due to either their limited (or nonexistent) sensorization or limited dexterity. SmartHand tackles this issue, as it is meant to be clinically tested in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on-bench evaluation of the SmartHand. Methods SmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components within the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces. Results SmartHand is a five-fingered self-contained robotic hand, with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller, both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count, and independently point the index finger. The weight (530 g) and speed (closing time: 1.5 seconds) are comparable to actual commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that within particular friction and geometric conditions the hand is able to stably grasp cylindrical objects of up to 3.6 kg. Conclusions Due to its unique embedded features and human size, the SmartHand holds the promise to be experimentally fitted on transradial amputees and employed as a bi-directional instrument for investigating, during realistic experiments, different interfaces, control and feedback strategies in neuro-engineering studies. PMID:21600048
Young, K L; Koppel, S; Charlton, J L
2017-09-01
Older adults are the fastest growing segment of the driving population. While there is a strong emphasis on older people maintaining their mobility, the safety of older drivers is a serious community concern. Frailty and a range of age-related sensory, cognitive, and physical declines can place older drivers at an increased risk of crash-related injuries and death. A number of studies have indicated that in-vehicle technologies such as Advanced Driver Assistance Systems (ADAS) and In-Vehicle Information Systems (IVIS) may provide assistance to older drivers. However, these technologies will only benefit older drivers if their design is congruent with the complex needs and diverse abilities of this driving cohort. The design of ADAS and IVIS is largely informed by automotive Human Machine Interface (HMI) guidelines. However, it is unclear to what extent the declining sensory, cognitive and physical capabilities of older drivers are addressed in the current guidelines. This paper provides a review of key current design guidelines for IVIS and ADAS with respect to the extent to which they address age-related changes in functional capacities. The review revealed that most of the HMI guidelines do not address design issues related to older driver impairments. In fact, in many guidelines, driver age and sensory, cognitive, and physical impairments are not mentioned at all, and where reference is made, it is typically very broad. Prescriptive advice on how to actually design a system so that it addresses the needs and limitations of older drivers is not provided. In order for older drivers to reap the full benefits that in-vehicle technology can afford, it is critical that further work establish how older driver limitations and capabilities can be supported by the system design process, including their inclusion in HMI design guidelines.
Mobile Tactical HF/VHF/EW System for Ground Forces
1989-09-01
presentation of what I have learned. I would like to thank my advisor, Professor Robert Partelow, and co-advisor, Commander James R. Powell, for the... analyze newly developed systems to determine how the man-machine interfaces of such systems can best be designed for optimal use by the operators. B... terminals and other controls. If factors like luminance ratio, reflectance, glare illuminance are allowed for good man-machine interface then an effective...
Man-machine analysis of translation and work tasks of Skylab films
NASA Technical Reports Server (NTRS)
Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.
1979-01-01
An objective approach to determine the concurrent validity of computer-graphic models is real time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.
State of the art in nuclear telerobotics: focus on the man/machine connection
NASA Astrophysics Data System (ADS)
Greaves, Amna E.
1995-12-01
The interface between the human controller and remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality, and multi degree-of-freedom input devices lend us the ability to augment the man/machine interface, and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI that enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting edge concepts and enhancements that lead to the next generation of telerobotic interface systems.
New generation emerging technologies for neurorehabilitation and motor assistance.
Frisoli, Antonio; Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele
2016-12-01
This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies and in particular on robotic exoskeletons as tools for increasing freedom to move and for performing Activities of Daily Living (ADLs). This would result in a deep improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal driven human-machine interfaces can increase the degree of participation of the patient in robotic training, allowing the user's intention to be recognized and assisting the patient in rehabilitation tasks, thus representing a fundamental aspect to elicit motor learning.
ERIC Educational Resources Information Center
Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.
2016-01-01
A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…
Research on ARM Numerical Control System
NASA Astrophysics Data System (ADS)
Wei, Xu; JiHong, Chen
Computerized Numerical Control (CNC) machine tools are the foundation of modern manufacturing systems, and their advanced digital technology is key to the sustainable development of the machine tool manufacturing industry. This paper presents the design of a CNC system embedded on ARM, covering both the hardware design and the supporting software. On the hardware side, the core of the motor control unit is the MCX314AL DSP motion-control chip developed by NOVA Electronics Co., Ltd. of Japan; its strong performance, simple interface, and easy programming make machine control convenient. On the software side, the open-source uC/OS-II is selected as the embedded operating system, and the CNC system is broken down into modules whose priorities are assigned according to their actual requirements. The communication mechanisms between modules and the interrupt responses are designed to guarantee the real-time behavior and reliability of the numerical control system. As a result, the system not only meets current requirements for precision machining, but also provides a good man-machine interface and network support that are convenient for a variety of operators.
Tattoolike Polyaniline Microparticle-Doped Gold Nanowire Patches as Highly Durable Wearable Sensors.
Gong, Shu; Lai, Daniel T H; Wang, Yan; Yap, Lim Wei; Si, Kae Jye; Shi, Qianqian; Jason, Naveen Noah; Sridhar, Tam; Uddin, Hemayet; Cheng, Wenlong
2015-09-09
Wearable and highly sensitive strain sensors are essential components of electronic skin for future biomonitoring and human-machine interfaces. Here we report a low-cost yet efficient strategy to dope polyaniline microparticles into gold nanowire (AuNW) films, leading to a 10-fold enhancement in conductivity and an ∼8-fold improvement in sensitivity. Simultaneously, tattoolike wearable sensors could be fabricated simply by a direct "draw-on" strategy with a Chinese penbrush. The stretchability of the sensors could be enhanced from 99.7% to 149.6% by designing curved tattoos with different radii of curvature. We also demonstrated a roller-coating method to encapsulate the AuNW sensors, which exhibited excellent water resistance and durability. Because of the improved conductivity of our sensors, they can directly interface with existing wireless circuitry, allowing the fabrication of wireless flexion sensors for a human finger-controlled robotic arm system.
Gesture-Controlled Interfaces for Self-Service Machines
NASA Technical Reports Server (NTRS)
Cohen, Charles J.; Beach, Glenn
2006-01-01
Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs), for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, this gesture-controlled interface can extract more information from dynamic than it can from static gestures.
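The translator subsystem described above is essentially a mapping from recognized gesture labels to machine commands. A minimal sketch of that idea in Python is given below; the gesture names and command strings are invented for illustration and are not taken from the article.

    # Minimal sketch of a translator subsystem that maps recognized gestures to
    # self-service-machine (SSM) commands. Gesture names and command strings are
    # hypothetical; a real system would receive gestures from the vision subsystem.

    GESTURE_TO_COMMAND = {
        "point_up": "DISPLAY_INFO",
        "swipe_left": "PREVIOUS_SCREEN",
        "swipe_right": "NEXT_SCREEN",
        "closed_fist": "CONFIRM_SELECTION",
        "open_palm": "CANCEL",
    }

    def translate(gesture_sequence):
        """Convert a sequence of recognized gestures into SSM commands,
        ignoring gestures that have no mapping."""
        return [GESTURE_TO_COMMAND[g] for g in gesture_sequence if g in GESTURE_TO_COMMAND]

    if __name__ == "__main__":
        print(translate(["open_palm", "swipe_right", "closed_fist"]))

A real SSM would of course also validate command sequences and handle unrecognized gestures more carefully.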
Mechanically Compliant Electronic Materials for Wearable Photovoltaics and Human-Machine Interfaces
NASA Astrophysics Data System (ADS)
O'Connor, Timothy Francis, III
Applications of stretchable electronic materials for human-machine interfaces are described herein. Intrinsically stretchable organic conjugated polymers and stretchable electronic composites were used to develop stretchable organic photovoltaics (OPVs), mechanically robust wearable OPVs, and human-machine interfaces for gesture recognition, American Sign Language translation, haptic control of robots, and touch emulation for virtual reality, augmented reality, and the transmission of touch. The stretchable and wearable OPVs comprise active layers of poly-3-alkylthiophene:phenyl-C61-butyric acid methyl ester (P3AT:PCBM) and transparent conductive electrodes of poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS); these devices could only be fabricated through a deep understanding of the connection between molecular structure and the co-engineering of electronic performance with mechanical resilience. The talk concludes with the use of composite piezoresistive sensors in two smart glove prototypes. The first integrates stretchable strain sensors comprising a carbon-elastomer composite, a wearable microcontroller, low-energy Bluetooth, and a 6-axis accelerometer/gyroscope to construct a fully functional gesture recognition glove capable of wirelessly translating American Sign Language to text on a cell phone screen. The second creates a system for the haptic control of a 3D printed robot arm, as well as the transmission of touch and temperature information.
Three-dimensional virtual acoustic displays
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.
1991-01-01
The development of an alternative medium for displaying information in complex human-machine interfaces is described. The 3-D virtual acoustic display is a means for accurately transferring information to a human operator using the auditory modality; it combines directional and semantic characteristics to form naturalistic representations of dynamic objects and events in remotely sensed or simulated environments. Although the technology can stand alone, it is envisioned as a component of a larger multisensory environment and will no doubt find its greatest utility in that context. The general philosophy in the design of the display has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and only later by technological capabilities or constraints. In expanding on this view, current and potential uses of virtual acoustic displays are addressed, such displays are characterized, recent approaches to their implementation and application are reviewed, the research project at NASA-Ames is described in detail, and finally some critical research issues for the future are outlined.
O'Shea, Daniel J; Trautmann, Eric; Chandrasekaran, Chandramouli; Stavisky, Sergey; Kao, Jonathan C; Sahani, Maneesh; Ryu, Stephen; Deisseroth, Karl; Shenoy, Krishna V
2017-01-01
A central goal of neuroscience is to understand how populations of neurons coordinate and cooperate in order to give rise to perception, cognition, and action. Nonhuman primates (NHPs) are an attractive model with which to understand these mechanisms in humans, primarily due to the strong homology of their brains and the cognitively sophisticated behaviors they can be trained to perform. Using electrode recordings, the activity of one to a few hundred individual neurons may be measured electrically, which has enabled many scientific findings and the development of brain-machine interfaces. Despite these successes, electrophysiology samples sparsely from neural populations and provides little information about the genetic identity and spatial micro-organization of recorded neurons. These limitations have spurred the development of all-optical methods for neural circuit interrogation. Fluorescent calcium signals serve as a reporter of neuronal responses, and when combined with post-mortem optical clearing techniques such as CLARITY, provide dense recordings of neuronal populations, spatially organized and annotated with genetic and anatomical information. Here, we advocate that this methodology, which has been of tremendous utility in smaller animal models, can and should be developed for use with NHPs. We review here several of the key opportunities and challenges for calcium-based optical imaging in NHPs. We focus on motor neuroscience and brain-machine interface design as representative domains of opportunity within the larger field of NHP neuroscience. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Ying; Wang, Youhua; Liu, Runfeng; Xiao, Lin; Zhang, Qin; Huang, YongAn
2018-01-01
Epidermal electronics (e-skin) emerging in recent years offer the opportunity to noninvasively and wearably extract biosignals from human bodies. The conventional fabrication of e-skin, based on standard microelectronic processes and a variety of transfer printing methods, nevertheless constrains the size of the devices, posing a serious challenge to collecting signals via the skin, the largest organ of the human body. Herein we propose a multichannel noninvasive human-machine interface (HMI) using stretchable surface electromyography (sEMG) patches to realize a robot hand mimicking human gestures. Time-efficient processes are first developed to manufacture large-scale, micrometer-thick stretchable devices. With micron thickness, the stretchable sEMG patches show excellent conformability with human skin and, consequently, electrical performance comparable to conventional gel electrodes. Combined with their large-scale size, the multichannel noninvasive HMI based on these stretchable sEMG patches successfully manipulates the robot hand with eight different gestures, with precision as high as that of a conventional gel electrode array.
Analysis of operational comfort in manual tasks using human force manipulability measure.
Tanaka, Yoshiyuki; Nishikawa, Kazuo; Yamada, Naoki; Tsuji, Toshio
2015-01-01
This paper proposes a scheme for human force manipulability (HFM) based on the use of isometric joint torque properties to simulate the spatial characteristics of human operation forces at an end-point of a limb with feasible magnitudes for a specified limb posture. This is also applied to the evaluation/prediction of operational comfort (OC) when manually operating a human-machine interface. The effectiveness of HFM is investigated through two experiments and computer simulations of humans generating forces by using their upper extremities. Operation force generation with maximum isometric effort can be roughly estimated with an HFM measure computed from information on the arm posture during a maintained posture. The layout of a human-machine interface is then discussed based on the results of operational experiments using an electric gear-shifting system originally developed for robotic devices. The results indicate a strong relationship between the spatial characteristics of the HFM and OC levels when shifting, and the OC is predicted by using a multiple regression model with HFM measures.
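As a rough illustration of how feasible end-point forces can be derived from isometric joint torque limits and posture, the sketch below computes the maximum force a planar two-link linkage could exert in a given direction. It is a simplified stand-in for a human-force-manipulability style measure, not the paper's HFM formulation; the link lengths, joint angles, and torque limits are assumed values.

    import numpy as np

    def jacobian_2link(l1, l2, q1, q2):
        """Planar 2-link Jacobian mapping joint velocities to end-point velocities."""
        s1, c1 = np.sin(q1), np.cos(q1)
        s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
        return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                         [ l1 * c1 + l2 * c12,  l2 * c12]])

    def max_force_along(J, tau_max, direction):
        """Largest end-point force magnitude along `direction` such that the
        required joint torques tau = J^T f stay within +/- tau_max."""
        u = np.asarray(direction, dtype=float)
        u = u / np.linalg.norm(u)
        tau_per_unit_force = J.T @ u          # torque needed per 1 N of force along u
        with np.errstate(divide="ignore"):
            limits = np.abs(tau_max) / np.abs(tau_per_unit_force)
        return float(np.min(limits))

    # Assumed link lengths (m), posture, and joint torque limits (N*m)
    J = jacobian_2link(l1=0.30, l2=0.25, q1=np.deg2rad(40), q2=np.deg2rad(70))
    print(max_force_along(J, tau_max=np.array([40.0, 25.0]), direction=[1.0, 0.0]))

Repeating the calculation over many directions traces out the kind of spatial force profile the abstract associates with a given limb posture.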
Toward a mathematical formalism of performance, task difficulty, and activation
NASA Technical Reports Server (NTRS)
Samaras, George M.
1988-01-01
The rudiments of a mathematical formalism for handling operational, physiological, and psychological concepts are developed for use by the man-machine system design engineer. The formalism provides a framework for developing a structured, systematic approach to the interface design problem, using existing mathematical tools, and simplifying the problem of telling a machine how to measure and use performance.
Cursor control by Kalman filter with a non-invasive body–machine interface
Seáñez-González, Ismael; Mussa-Ivaldi, Ferdinando A
2015-01-01
Objective We describe a novel human–machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user’s upper-body. Approach A calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor. Main results Our results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body–machine interface systems as an alternative or complement to brain–machine interfaces for accomplishing cursor control in 2D space. Significance The present study may serve as a platform for people with high-tetraplegia to control assistive devices such as powered wheelchairs using a joystick. PMID:25242561
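To make the filtering step concrete, the following sketch shows a generic linear Kalman filter estimating a 2D cursor state (position and velocity) from a vector of body-sensor readings. The observation matrix stands in for the calibrated shoulder-to-cursor map described in the abstract; all dimensions, noise levels, and the update rate are assumptions, not the authors' values.

    import numpy as np

    dt = 0.02                                  # 50 Hz update rate (assumed)
    A = np.eye(4)
    A[0, 2] = A[1, 3] = dt                     # constant-velocity motion model for [x, y, vx, vy]
    Q = 1e-3 * np.eye(4)                       # process noise (assumed)
    R = 1e-2 * np.eye(8)                       # sensor noise for 8 IMU-derived channels (assumed)
    H = np.random.default_rng(0).normal(size=(8, 4))  # stands in for the calibrated map

    x = np.zeros(4)                            # initial cursor state
    P = np.eye(4)                              # initial state covariance

    def kalman_step(x, P, z):
        # Predict
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update with sensor vector z
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.solve(S, np.eye(S.shape[0]))
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new

    z = H @ np.array([0.1, 0.2, 0.0, 0.0])     # synthetic sensor reading
    x, P = kalman_step(x, P, z)
    print(x[:2])                               # estimated cursor coordinates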
2008-03-31
on automation; the ‘response bias’ approach. This new approach is based on Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens...SDT), response bias will vary with the expectation of the target probability, whereas their sensitivity will stay constant (Macmillan & Creelman...measures, C has the simplest statistical properties (Macmillan & Creelman, 1991, p. 273), and it was also the measure used in Dzindolet et al.'s study
Software architecture for time-constrained machine vision applications
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.
2013-01-01
Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
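The topic-based publish/subscribe pattern described above can be sketched in a few lines. The version below is a generic illustration in Python rather than the architecture's actual messaging layer, and the topic names and payloads are invented.

    from collections import defaultdict

    # Minimal topic-based publish/subscribe bus: publishers post messages to named
    # topics, and only the callbacks subscribed to that topic receive them.

    class MessageBus:
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)

        def publish(self, topic, message):
            for callback in self._subscribers[topic]:
                callback(message)

    bus = MessageBus()
    bus.subscribe("frames.acquired", lambda m: print("processing frame", m["id"]))
    bus.subscribe("jam.detected", lambda m: print("alarm on line", m["line"]))

    bus.publish("frames.acquired", {"id": 42, "pixels": None})
    bus.publish("jam.detected", {"line": 3})

Routing by topic keeps acquisition, processing, and visualization modules decoupled, which is the property the messaging layer above is meant to provide.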
Lopes, Ana C; Nunes, Urbano
2009-01-01
This paper aims to present a new framework to train people with severe motor disabilities steering an assisted mobile robot (AMR), such as a powered wheelchair. Users with high level of motor disabilities are not able to use standard HMIs, which provide a continuous command signal (e. g. standard joystick). For this reason HMIs providing a small set of simple commands, which are sparse and discrete in time must be used (e. g. scanning interface, or brain computer interface), making very difficult to steer the AMR. In this sense, the assisted navigation training framework (ANTF) is designed to train users driving the AMR, in indoor structured environments, using this type of HMIs. Additionally it provides user characterization on steering the robot, which will later be used to adapt the AMR navigation system to human competence steering the AMR. A rule-based lens (RBL) model is used to characterize users on driving the AMR. Individual judgment performance choosing the best manoeuvres is modeled using a genetic-based policy capturing (GBPC) technique characterized to infer non-compensatory judgment strategies from human decision data. Three user models, at three different learning stages, using the RBL paradigm, are presented.
NASA Technical Reports Server (NTRS)
Malone, T. B.
1972-01-01
Requirements were determined analytically for the man machine interface for a teleoperator system performing on-orbit satellite retrieval and servicing. Requirements are basically of two types; mission/system requirements, and design requirements or design criteria. Two types of teleoperator systems were considered: a free flying vehicle, and a shuttle attached manipulator. No attempt was made to evaluate the relative effectiveness or efficiency of the two system concepts. The methodology used entailed an application of the Essex Man-Systems analysis technique as well as a complete familiarization with relevant work being performed at government agencies and by private industry.
ClearTK 2.0: Design Patterns for Machine Learning in UIMA
Bethard, Steven; Ogren, Philip; Becker, Lee
2014-01-01
ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework. PMID:29104966
ClearTK 2.0: Design Patterns for Machine Learning in UIMA.
Bethard, Steven; Ogren, Philip; Becker, Lee
2014-05-01
ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework.
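The "feature extraction that works across different classifiers" idea can be illustrated outside UIMA as well. The sketch below uses Python and scikit-learn (not ClearTK's Java API) to show feature extraction kept separate from the learning back-end; the token features and toy labels are invented.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    # Feature extraction is defined once and can feed any classifier back-end.
    def extract_features(token):
        return {"lower": token.lower(), "is_title": token.istitle(), "length": len(token)}

    tokens = ["ClearTK", "adds", "machine", "learning", "to", "UIMA"]
    labels = [1, 0, 0, 0, 0, 1]                # toy labels: 1 = "name-like" token

    vectorizer = DictVectorizer()
    X = vectorizer.fit_transform([extract_features(t) for t in tokens])
    classifier = LogisticRegression().fit(X, labels)
    print(classifier.predict(vectorizer.transform([extract_features("Becker")])))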
Space Station Workstation Technology Workshop Report
NASA Technical Reports Server (NTRS)
Moe, K. L.; Emerson, C. M.; Eike, D. R.; Malone, T. B.
1985-01-01
This report describes the results of a workshop conducted at Goddard Space Flight Center (GSFC) to identify current and anticipated trends in human-computer interface technology that may influence the design or operation of a space station workstation. The workshop was attended by approximately 40 persons from government and academia who were selected for their expertise in some aspect of human-machine interaction research. The focus of the workshop was a 1 1/2 brainstorming/forecasting session in which the attendees were assigned to interdisciplinary working groups and instructed to develop predictions for each of the following technology areas: (1) user interface, (2) resource management, (3) control language, (4) data base systems, (5) automatic software development, (6) communications, (7) training, and (8) simulation. This report is significant in that it provides a unique perspective on workstation design for the space station. This perspective, which is characterized by a major emphasis on user requirements, should be most valuable to Phase B contractors involved in design development of the space station workstation. One of the more compelling results of the workshop is the recognition that no major technological breakthroughs are required to implement the current workstation concept. What is required is the creative application of existing knowledge and technology.
Concurrent Image Processing Executive (CIPE). Volume 1: Design overview
NASA Technical Reports Server (NTRS)
Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1990-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces
NASA Technical Reports Server (NTRS)
Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)
1988-01-01
The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.
CHISSL: A Human-Machine Collaboration Space for Unsupervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Komurlu, Caner; Blaha, Leslie M.
We developed CHISSL, a human-machine interface that utilizes supervised machine learning in an unsupervised context to help the user group unlabeled instances by her own mental model. The user primarily interacts via correction (moving a misplaced instance into its correct group) or confirmation (accepting that an instance is placed in its correct group). Concurrent with the user's interactions, CHISSL trains a classification model guided by the user's grouping of the data. It then predicts the group of unlabeled instances and arranges some of these alongside the instances manually organized by the user. We hypothesize that this mode of human and machine collaboration is more effective than Active Learning, wherein the machine decides for itself which instances should be labeled by the user. We found supporting evidence for this hypothesis in a pilot study where we applied CHISSL to organize a collection of handwritten digits.
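A minimal sketch of this correct-and-confirm interaction loop is given below, using a nearest-centroid classifier as a stand-in for whatever model CHISSL actually trains; the feature vectors, group names, and feedback events are synthetic.

    import numpy as np

    rng = np.random.default_rng(1)
    unlabeled = rng.normal(size=(200, 16))        # e.g. digit feature vectors (synthetic)
    labeled = {}                                  # index -> group id, filled by the user

    def user_feedback(index, group):
        """Record a confirmation or correction from the analyst."""
        labeled[index] = group

    def suggest_groups():
        """Predict a group for each instance from the current centroids
        (labeled instances included, for simplicity)."""
        groups = sorted(set(labeled.values()))
        centroids = np.stack([unlabeled[[i for i, g in labeled.items() if g == k]].mean(axis=0)
                              for k in groups])
        distances = np.linalg.norm(unlabeled[:, None, :] - centroids[None, :, :], axis=2)
        return {i: groups[j] for i, j in enumerate(distances.argmin(axis=1))}

    user_feedback(0, "group_a")
    user_feedback(5, "group_b")
    user_feedback(9, "group_a")                   # a correction: the user moved instance 9
    print(list(suggest_groups().items())[:5])

Each round of corrections changes the centroids, so the machine's suggested grouping drifts toward the analyst's mental model, which is the collaboration mode the abstract describes.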
Designing Contestability: Interaction Design, Machine Learning, and Mental Health
Hirsch, Tad; Merced, Kritzia; Narayanan, Shrikanth; Imel, Zac E.; Atkins, David C.
2017-01-01
We describe the design of an automated assessment and training tool for psychotherapists to illustrate challenges with creating interactive machine learning (ML) systems, particularly in contexts where human life, livelihood, and wellbeing are at stake. We explore how existing theories of interaction design and machine learning apply to the psychotherapy context, and identify “contestability” as a new principle for designing systems that evaluate human behavior. Finally, we offer several strategies for making ML systems more accountable to human actors. PMID:28890949
Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface
NASA Astrophysics Data System (ADS)
Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry
2007-04-01
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.
Human factors issues for interstellar spacecraft
NASA Technical Reports Server (NTRS)
Cohen, Marc M.; Brody, Adam R.
1991-01-01
Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during transtellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.
Physics-based and human-derived information fusion for analysts
NASA Astrophysics Data System (ADS)
Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael
2017-05-01
Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.
FSW of Aluminum Tailor Welded Blanks across Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hovanski, Yuri; Upadhyay, Piyush; Carlson, Blair
2015-02-16
Development and characterization of friction stir welded aluminum tailor welded blanks was successfully carried out on three separate machine platforms. Each was a commercially available, gantry style, multi-axis machine designed specifically for friction stir welding. Weld parameters were developed to support high volume production of dissimilar thickness aluminum tailor welded blanks at speeds of 3 m/min and greater. Parameters originally developed on an ultra-high stiffness servo driven machine were first transferred to a high stiffness servo-hydraulic friction stir welding machine, and subsequently transferred to a purpose built machine designed to accommodate thin sheet aluminum welding. The inherent beam stiffness, bearing compliance, and control system for each machine were distinctly unique, which posed specific challenges in transferring welding parameters across machine platforms. This work documents the challenges imposed by successfully transferring weld parameters from machine to machine, produced by different manufacturers and with unique control systems and interfaces.
1985-11-01
the group to be alert to changes in goals, noting that if the model is not sensitive to goal changes, it will lack validity. Mr. Hartzell announced...This increased emphasis on the soldier-machine interface has not been a sudden change. Instead it has been a gradual one coincident with and...point alone in affecting both design changes and operational doctrine for the system. Analysis of these data should first compare achieved
The JPL telerobot operator control station. Part 2: Software
NASA Technical Reports Server (NTRS)
Kan, Edwin P.; Landell, B. Patrick; Oxenberg, Sheldon; Morimoto, Carl
1989-01-01
The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The software design of the operator control system is discussed.
KARL: A Knowledge-Assisted Retrieval Language. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, relationships between entities, and basic English language syntax and semantics to translate the user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that will assist in improving the human-machine interface will be needed and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development into the emerging field of natural language database query systems.
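The restricted-English idea can be illustrated with a toy translator: a small entity dictionary plus a narrow grammar pattern turns one class of request into a formal query. The sketch below is far simpler than the knowledge-assisted approach the thesis describes, and the entity names, grammar, and SQL-like target form are assumptions for illustration only.

    import re

    # Hypothetical entity dictionary standing in for the "intelligent dictionary".
    ENTITIES = {"documents": "documents", "authors": "authors"}

    def translate(request):
        """Translate one restricted-English pattern into a formal (SQL-like) query."""
        m = re.match(r"show (\w+) where (\w+) is (\w+)", request.lower())
        if not m or m.group(1) not in ENTITIES:
            raise ValueError("request outside the restricted grammar")
        table, field, value = m.groups()
        return f"SELECT * FROM {ENTITIES[table]} WHERE {field} = '{value}';"

    print(translate("show documents where author is Smith"))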
The remapping of space in motor learning and human-machine interfaces
Mussa-Ivaldi, F.A.; Danziger, Z.
2009-01-01
Studies of motor adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. One of the most fundamental elements of our environment is space itself. This article focuses on the notion of Euclidean space as it applies to common sensory motor experiences. Starting from the assumption that we interact with the world through a system of neural signals, we observe that these signals are not inherently endowed with the metric properties of ordinary Euclidean space. The ability of the nervous system to represent these properties depends on adaptive mechanisms that reconstruct the Euclidean metric from signals that are not Euclidean. Gaining access to these mechanisms will reveal the process by which the nervous system handles novel sophisticated coordinate transformation tasks, thus highlighting possible avenues to create functional human-machine interfaces that can make that task much easier. A set of experiments is presented that demonstrate the ability of the sensory-motor system to reorganize coordination in novel geometrical environments. In these environments, multiple degrees of freedom of body motions are used to control the coordinates of a point in a two-dimensional Euclidean space. We discuss how practice leads to the acquisition of the metric properties of the controlled space. Methods of machine learning based on the reduction of reaching errors are tested as a means to facilitate learning by adaptively changing the map from body motions to the controlled device. We discuss the relevance of the results to the development of adaptive human-machine interfaces and optimal control. PMID:19665553
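One simple way to realize such a body-to-device map, shown below as a hedged sketch rather than the authors' method, is to fit a linear map from many body-motion signals to the two cursor coordinates by least squares on calibration data, and to re-fit it as reaching errors accumulate; the signal dimensions and data are synthetic.

    import numpy as np

    rng = np.random.default_rng(2)
    n_signals, n_samples = 12, 500
    body = rng.normal(size=(n_samples, n_signals))          # recorded body-motion signals
    true_W = rng.normal(size=(n_signals, 2))
    cursor = body @ true_W + 0.05 * rng.normal(size=(n_samples, 2))  # target cursor path

    # Calibrated linear map from body signals to 2D cursor coordinates.
    W, *_ = np.linalg.lstsq(body, cursor, rcond=None)
    predicted = body @ W
    print("mean reaching error:", np.mean(np.linalg.norm(predicted - cursor, axis=1)))

Refitting W on new data after each block of practice is one way the "reduction of reaching errors" adaptation mentioned above could be approximated.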
A vibro-haptic human-machine interface for structural health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mascarenas, David; Plont, Crystal; Brown, Christina
The structural health monitoring (SHM) community’s goal has been to endow physical systems with a nervous system not unlike those commonly found in living organisms. Typically the SHM community has attempted to do this by instrumenting structures with a variety of sensors, and then applying various signal processing and classification procedures to the data in order to detect the presence of damage, the location of damage, the severity of damage, and to estimate the remaining useful life of the structure. This procedure has had some success, but we are still a long way from achieving the performance of nervous systems found in biology. This is primarily because contemporary classification algorithms do not have the performance required. In many cases expert judgment is superior to automated classification. This work introduces a new paradigm. We propose interfacing the human nervous system to the distributed sensor network located on the structure and developing new techniques to enable human-machine cooperation. Results from the field of sensory substitution suggest this should be possible. This study investigates a vibro-haptic human-machine interface for SHM. The investigation was performed using a surrogate three-story structure. The structure features three nonlinearity-inducing bumpers to simulate damage. Accelerometers are placed on each floor to measure the response of the structure to a harmonic base excitation. The accelerometer measurements are preprocessed, and the preprocessed data are then encoded as a vibro-tactile stimulus. Human subjects were then subjected to the vibro-tactile stimulus and asked to characterize the damage in the structure.
A vibro-haptic human-machine interface for structural health monitoring
Mascarenas, David; Plont, Crystal; Brown, Christina; ...
2014-11-01
The structural health monitoring (SHM) community’s goal has been to endow physical systems with a nervous system not unlike those commonly found in living organisms. Typically the SHM community has attempted to do this by instrumenting structures with a variety of sensors, and then applying various signal processing and classification procedures to the data in order to detect the presence of damage, the location of damage, the severity of damage, and to estimate the remaining useful life of the structure. This procedure has had some success, but we are still a long way from achieving the performance of nervous systems found in biology. This is primarily because contemporary classification algorithms do not have the performance required. In many cases expert judgment is superior to automated classification. This work introduces a new paradigm. We propose interfacing the human nervous system to the distributed sensor network located on the structure and developing new techniques to enable human-machine cooperation. Results from the field of sensory substitution suggest this should be possible. This study investigates a vibro-haptic human-machine interface for SHM. The investigation was performed using a surrogate three-story structure. The structure features three nonlinearity-inducing bumpers to simulate damage. Accelerometers are placed on each floor to measure the response of the structure to a harmonic base excitation. The accelerometer measurements are preprocessed, and the preprocessed data are then encoded as a vibro-tactile stimulus. Human subjects were then subjected to the vibro-tactile stimulus and asked to characterize the damage in the structure.
1993-03-25
application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic has been incorporated...through the application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic has...programming and Human-Computer Interface (HCI) design. Knowledge gained from each is applied to the design of a Form-based interface for database data
A Cognitive Systems Engineering Approach to Developing HMI Requirements for New Technologies
NASA Technical Reports Server (NTRS)
Fern, Lisa Carolynn
2016-01-01
This document examines the challenges inherent in designing and regulating to support human-automation interaction for new technologies that will be deployed into complex systems. A key question for new technologies is how work will be accomplished by the human and machine agents. This question has traditionally been framed as how functions should be allocated between humans and machines. Such framing misses the coordination and synchronization that is needed for the different human and machine roles in the system to accomplish their goals. Coordination and synchronization demands are driven by the underlying human-automation architecture of the new technology, which is typically not specified explicitly by the designers. The human-machine interface (HMI), which is intended to facilitate human-machine interaction and cooperation, however, typically is defined explicitly and therefore serves as a proxy for human-automation cooperation requirements with respect to technical standards for technologies. Unfortunately, mismatches between the HMI and the coordination and synchronization demands of the underlying human-automation architecture can lead to system breakdowns. A methodology is needed that both designers and regulators can utilize to evaluate the expected performance of a new technology given potential human-automation architectures. Three experiments were conducted to inform the minimum HMI requirements for a detect-and-avoid system for unmanned aircraft systems (UAS). The results of the experiments provided empirical input to specific minimum operational performance standards that UAS manufacturers will have to meet in order to operate UAS in the National Airspace System (NAS). These studies represent a success story for how to objectively and systematically evaluate prototype technologies as part of the process for developing regulatory requirements. They also provide an opportunity to reflect on the lessons learned from a recent research effort in order to improve the methodology for defining technology requirements for regulators in the future. The biggest shortcoming of the presented research program was the absence of the explicit definition, generation, and analysis of potential human-automation architectures. Failure to execute this step in the research process resulted in less efficient evaluation of the candidate prototype technologies, in addition to the complete absence of different approaches to human-automation cooperation. For example, all of the prototype technologies that were evaluated in the research program assumed a human-automation architecture that relied on serial processing from the automation to the human. While this type of human-automation architecture is typical across many different technologies and in many different domains, it ignores different architectures where humans and automation work in parallel. Defining potential human-automation architectures a priori also allows regulators to develop scenarios that will stress the performance boundaries of the technology during the evaluation phase. The importance of adding this step of generating and evaluating candidate human-automation architectures prior to formal empirical evaluation is discussed.
Development of the FITS tools package for multiple software environments
NASA Technical Reports Server (NTRS)
Pence, W. D.; Blackburn, J. K.
1992-01-01
The HEASARC is developing a package of general purpose software for analyzing data files in FITS format. This paper describes the design philosophy which makes the software both machine-independent (it runs on VAXs, Suns, and DEC-stations) and software environment-independent. Currently the software can be compiled and linked to produce IRAF tasks, or alternatively, the same source code can be used to generate stand-alone tasks using one of two implementations of a user-parameter interface library. The machine independence of the software is achieved by writing the source code in ANSI standard Fortran or C, using the machine-independent FITSIO subroutine interface for all data file I/O, and using a standard user-parameter subroutine interface for all user I/O. The latter interface is based on the Fortran IRAF Parameter File interface developed at STScI. The IRAF tasks are built by linking to the IRAF implementation of this parameter interface library. Two other implementations of this parameter interface library, which have no IRAF dependencies, are now available which can be used to generate stand-alone executable tasks. These stand-alone tasks can simply be executed from the machine operating system prompt either by supplying all the task parameters on the command line or by entering the task name after which the user will be prompted for any required parameters. A first release of this FTOOLS package is now publicly available. The currently available tasks are described, along with instructions on how to obtain a copy of the software.
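For readers who want to experiment with the same kind of machine-independent FITS access today, the sketch below uses the modern astropy.io.fits package (not the Fortran FITSIO interface or the IRAF parameter library described above) to write and re-read a small FITS file; the header keyword value is invented.

    import numpy as np
    from astropy.io import fits

    # Create a small image HDU, add a header keyword, write it, and read it back.
    data = np.arange(12, dtype=np.float32).reshape(3, 4)
    hdu = fits.PrimaryHDU(data)
    hdu.header["ORIGIN"] = "example"           # hypothetical header value
    hdu.writeto("example.fits", overwrite=True)

    with fits.open("example.fits") as hdul:
        print(hdul[0].header["ORIGIN"], hdul[0].data.shape)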
A structurally decoupled mechanism for measuring wrist torque in three degrees of freedom
NASA Astrophysics Data System (ADS)
Pan, Lizhi; Yang, Zhen; Zhang, Dingguo
2015-10-01
The wrist joint is a critical part of the human body for movement. Measuring the torque of the wrist with three degrees of freedom (DOFs) is important in some fields, including rehabilitation, biomechanics, ergonomics, and human-machine interfacing. However, the particular structure of the wrist joint makes it difficult to measure the torque in all three directions simultaneously. This work develops a structurally decoupled instrument for measuring and improving the measurement accuracy of 3-DOF wrist torque during isometric contraction. Three single-axis torque sensors were embedded in a customized mechanical structure. The dimensions and components of the instrument were designed based on requirement of manufacturability. A prototype of the instrument was machined, assembled, integrated, and tested. The results show that the structurally decoupled mechanism is feasible for acquiring wrist torque data in three directions either independently or simultaneously. As a case study, we use the device to measure wrist torques concurrently with electromyography signal acquisition in preparation for simultaneous and proportional myoelectric control of prostheses.
A structurally decoupled mechanism for measuring wrist torque in three degrees of freedom.
Pan, Lizhi; Yang, Zhen; Zhang, Dingguo
2015-10-01
The wrist joint is a critical part of the human body for movement. Measuring the torque of the wrist with three degrees of freedom (DOFs) is important in some fields, including rehabilitation, biomechanics, ergonomics, and human-machine interfacing. However, the particular structure of the wrist joint makes it difficult to measure the torque in all three directions simultaneously. This work develops a structurally decoupled instrument for measuring and improving the measurement accuracy of 3-DOF wrist torque during isometric contraction. Three single-axis torque sensors were embedded in a customized mechanical structure. The dimensions and components of the instrument were designed based on requirement of manufacturability. A prototype of the instrument was machined, assembled, integrated, and tested. The results show that the structurally decoupled mechanism is feasible for acquiring wrist torque data in three directions either independently or simultaneously. As a case study, we use the device to measure wrist torques concurrently with electromyography signal acquisition in preparation for simultaneous and proportional myoelectric control of prostheses.
NASA Astrophysics Data System (ADS)
Gorbunova, T. N.; Koltunov, I. I.; Tumanova, M. B.
2018-05-01
The article is devoted to the development of a model and control program for a 3D printer based on extrusion technology. The article describes all components of the machine and the blocks of the control program's interface.
NASA Technical Reports Server (NTRS)
Torosyan, David
2012-01-01
Just as important as the engineering that goes into building a robot is the method of interaction, or how human users will use the machine. As part of the Human-System Interactions group (Conductor) at JPL, I explored using a web interface to interact with ATHLETE, a prototype lunar rover. I investigated the usefulness of HTML 5 and Javascript as a telemetry viewer as well as the feasibility of having a rover communicate with a web server. To test my ideas I built a mobile-compatible website, designed primarily for an Android tablet. The website took input from ATHLETE engineers, and upon its completion I conducted a user test to assess its effectiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satogata, Todd
2013-04-22
The integrated control system (ICS) is responsible for the whole ESS machine and facility: accelerator, target, neutron scattering instruments and conventional facilities. This unified approach keeps the costs of development, maintenance and support relatively low. ESS has selected a standardised, field-proven controls framework, the Experimental Physics and Industrial Control System (EPICS), which was originally developed jointly by Argonne and Los Alamos National Laboratories. Complementing this selection are best practices and experience from similar facilities regarding platform standardisation, control system development and device integration and commissioning. The components of ICS include the control system core, the control boxes, the BLED database management system, and the human-machine interface. The control system core is a set of systems and tools that make it possible for the control system to provide required data, information and services to engineers, operators, physicists and the facility itself. The core components are the timing system that makes possible clock synchronisation across the facility, the machine protection system (MPS) and the personnel protection system (PPS) that prevent damage to the machine and personnel, and a set of control system services. Control boxes are servers that control a collection of equipment (for example a radio frequency cavity). The integrated control system will include many control boxes that can be assigned to one supplier, such as an internal team, a collaborating institute or a commercial vendor. This approach facilitates a clear division of responsibilities and makes integration much easier. A control box is composed of a standardised hardware platform, components, development tools and services. On the top level, it interfaces with the core control system components (timing, MPS, PPS) and with the human-machine interface. At the bottom, it interfaces with the equipment and parts of the facility through a set of analog and digital signals, real-time control loops and other communication buses. The ICS central data management system is named BLED (beam line element databases). BLED is a set of databases, tools and services that is used to store, manage and access data. It holds vital control system configuration and physics-related (lattice) information about the accelerator, target and instruments. It facilitates control system configuration by bringing together direct input-output controller (IOC) configuration and real-time data from proton and neutron beam line models. BLED also simplifies development and speeds up the code-test-debug cycle. The set of tools that access BLED will be tailored to the needs of different categories of users, such as ESS staff physicists, engineers, and operators; external partner laboratories; and visiting experimental instrument users. The human-machine interface is vital to providing a high-quality experience to ICS users. It encompasses a wide array of devices and software tools, from control room screens to engineer terminal windows; from beam physics data tools to post-mortem data analysis tools. It serves users with a wide range of skills from widely varied backgrounds. The Controls Group is developing a set of user profiles to accommodate this diverse range of use-cases and users.
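As a concrete taste of what an EPICS-based control-box interaction looks like from the HMI side, the sketch below uses the pyepics channel-access bindings to read, write, and monitor process variables. The PV names are hypothetical (not real ESS names), and the calls need a reachable IOC to return live data.

    from epics import caget, caput, camonitor

    # Read a setpoint PV, write a new value, and subscribe to readback updates.
    setpoint = caget("RFQ:CAV1:PHASE_SETPOINT")          # read a process variable
    print("current setpoint:", setpoint)
    caput("RFQ:CAV1:PHASE_SETPOINT", 35.0, wait=True)    # write a new setpoint

    def on_change(pvname=None, value=None, **kwargs):
        print(pvname, "changed to", value)

    camonitor("RFQ:CAV1:PHASE_RB", callback=on_change)   # subscribe to updates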
Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng
2017-01-01
A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems.
Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng
2017-01-01
A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734
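The core idea of weighting human and machine decisions by their associated uncertainty can be sketched very simply; the function below is a generic confidence-weighted average, not the paper's fuzzy decision-making fuser (FDMF), and the probabilities and confidence values are invented.

    # Fuse a human (BCI-derived) target score with a computer-vision score,
    # weighting each source by a confidence value in [0, 1].

    def fuse(p_human, conf_human, p_machine, conf_machine):
        """Return a fused target probability, weighting each agent by its confidence."""
        w_h, w_m = conf_human, conf_machine
        return (w_h * p_human + w_m * p_machine) / (w_h + w_m)

    # Example: the BCI weakly flags an RSVP image as a target; vision strongly agrees.
    print(fuse(p_human=0.62, conf_human=0.4, p_machine=0.91, conf_machine=0.8))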
Brain-machine interfacing control of whole-body humanoid motion
Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun
2014-01-01
We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to a human-like mechanical avatar. Our solution is based on the adequate reduction of the controllable dimensionality of a high-DOF humanoid motion, in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134
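The dimensionality-reduction idea can be sketched as a fixed linear expansion from a low-dimensional BMI command to a full joint-space target, which the autonomous whole-body controller would then track. The matrix, the 2D command, and the rest posture below are assumptions for illustration, not the authors' mapping.

    import numpy as np

    # Expand a low-dimensional BMI command into a high-DOF joint target.
    n_dof, n_command = 36, 2                    # 36 DOF as in the simulated humanoid (assumed here)
    rng = np.random.default_rng(3)
    synergy = rng.normal(scale=0.1, size=(n_dof, n_command))   # stand-in expansion basis
    rest_posture = np.zeros(n_dof)

    def command_to_posture(bmi_command):
        """Expand a 2D BMI command into a 36-DOF joint target about the rest posture."""
        return rest_posture + synergy @ np.asarray(bmi_command)

    print(command_to_posture([0.5, -0.2])[:5])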
Digital Systems Validation Handbook. Volume 2. Chapter 19. Pilot - Vehicle Interface
1993-11-01
checklists, and other status messages. Voice interactive systems are defined as "the interface between a cooperative human and a machine, which involves the... Pilot-Vehicle Interface 19-85; 5.6.1 Crew Interaction and the Cockpit 19-85; 5.6.2 Crew Resource Management and Safety 19-87; 5.6.3 Pilot and Crew Training... systems was a "stand-alone" component performing its intended function. Systems and their cockpit interfaces were added as technological advances were
Human-like machines: Transparency and comprehensibility.
Patrzyk, Piotr M; Link, Daniela; Marewski, Julian N
2017-01-01
Artificial intelligence algorithms seek inspiration from human cognitive systems in areas where humans outperform machines. But on what level should algorithms try to approximate human cognition? We argue that human-like machines should be designed to make decisions in transparent and comprehensible ways, which can be achieved by accurately mirroring human cognitive processes.
Roh, Eun; Hwang, Byeong-Ung; Kim, Doil; Kim, Bo-Yeong; Lee, Nae-Eung
2015-06-23
Interactivity between humans and smart systems, including wearable, body-attachable, or implantable platforms, can be enhanced by realization of multifunctional human-machine interfaces, where a variety of sensors collect information about the surrounding environment, intentions, or physiological conditions of the human to which they are attached. Here, we describe a stretchable, transparent, ultrasensitive, and patchable strain sensor that is made of a novel sandwich-like stacked piezoresistive nanohybrid film of single-wall carbon nanotubes (SWCNTs) and a conductive elastomeric composite of polyurethane (PU)-poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS). This sensor, which can detect small strains on human skin, was created using environmentally benign water-based solution processing. We attributed the tunability of strain sensitivity (i.e., gauge factor), stability, and optical transparency to enhanced formation of percolating networks between conductive SWCNTs and PEDOT phases at interfaces in the stacked PU-PEDOT:PSS/SWCNT/PU-PEDOT:PSS structure. The mechanical stability, high stretchability of up to 100%, optical transparency of 62%, and gauge factor of 62 suggested that when attached to the skin of the face, this sensor would be able to detect small strains induced by emotional expressions such as laughing and crying, as well as eye movement, and we confirmed this experimentally.
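As a quick check on what a gauge factor of 62 means in practice, the standard strain-gauge definition (general theory, not a formula quoted from the abstract) gives:

```latex
\mathrm{GF} \;=\; \frac{\Delta R / R_0}{\varepsilon}
\quad\Longrightarrow\quad
\frac{\Delta R}{R_0} \;=\; \mathrm{GF}\,\varepsilon \;=\; 62 \times 0.01 \;=\; 0.62
```

So a 1% skin strain already produces a 62% relative resistance change, which is why the small strains generated by facial expressions remain resolvable.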
Man-machine interface analysis of the flight design system
NASA Technical Reports Server (NTRS)
Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.
1978-01-01
The objective of the current effort was to perform a broad analysis of the human factors issues involved in the design of the Flight Design System (FDS). The analysis was intended to include characteristics of the system itself, such as: (1) basic structure and functional capabilities of FDS; (2) user backgrounds, capabilities, and possible modes of use; (3) FDS interactive dialogue, problem solving aids; (4) system data management capabilities; and to include, as well, such system related matters as: (1) flight design team structure; (2) roles of technicians; (3) user training; and (4) methods of evaluating system performance. Wherever possible, specific recommendations are made. In other cases, the issues which seem most important are identified. In some cases, additional analyses or experiments which might provide resolution are suggested.
The future of the provision process for mobility assistive technology: a survey of providers.
Dicianno, Brad E; Joseph, James; Eckstein, Stacy; Zigler, Christina K; Quinby, Eleanor J; Schmeler, Mark R; Schein, Richard M; Pearlman, Jon; Cooper, Rory A
2018-03-20
The purpose of this study was to evaluate the opinions of providers of mobility assistive technologies to help inform a research agenda and set priorities. This survey study was anonymous and gathered opinions of individuals who participate in the process to provide wheelchairs and other assistive technologies to clients. Participants were asked to rank the importance of developing various technologies and rank items against each other in terms of order of importance. Participants were also asked to respond to several open-ended questions or statements. A total of 161 providers from 35 states within the USA consented to participation and completed the survey. This survey revealed themes of advanced wheelchair design, assistive robotics and intelligent systems, human machine interfaces and smart device applications. It also outlined priorities for researchers to provide continuing education to clients and providers. These themes will be used to develop research and development priorities. Implications for Rehabilitation • Research in advanced wheelchair design is needed to facilitate travel and environmental access with wheelchairs and to develop alternative power sources for wheelchairs.• New assistive robotics and intelligent systems are needed to help wheelchairs overcome obstacles or self-adjust, assist wheelchair navigation in the community, assist caregivers and transfers, and aid ambulation.• Innovations in human machine interfaces may help advance the control of mobility devices and robots with the brain, eye movements, facial gesture recognition or other systems.• Development of new smart devices is needed for better control of the environment, monitoring activity and promoting healthy behaviours.
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor brain-machine interfaces translate neural signals into movement parameters. They usually assume that the relationship between neural firing and movement is stationary, which recent studies contradict by observing time-varying neuron tuning. This non-stationarity results from neural plasticity, motor learning, and related processes, and it degrades decoding performance when the model is kept fixed. To track non-stationary neuron tuning during decoding, we propose a dual-model approach based on Monte Carlo point-process filtering that also estimates the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than one with static tuning parameters, suggesting a promising way to design a long-term decoding model for brain-machine interfaces.
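The practical point of the adaptive decoder can be illustrated with a toy example: a single neuron whose tuning gain drifts, decoded once with a fixed gain and once with a gain that is re-estimated on line. This is a deliberately simplified stand-in (a normalized-LMS update on synthetic data) for the paper's Monte Carlo point-process dual estimation; all names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-stationarity: one neuron whose tuning gain drifts slowly over time.
T = 2000
velocity = rng.normal(size=T)                       # "true" movement parameter
gain = 1.0 + 0.001 * np.arange(T)                   # slowly drifting tuning gain
rate = gain * velocity + 0.1 * rng.normal(size=T)   # observed (rescaled) firing

# Static decoder: estimate the gain once from the first 200 samples and freeze it.
g_static = rate[:200] @ velocity[:200] / (velocity[:200] @ velocity[:200])

# Adaptive decoder: keep re-estimating the gain with a normalized LMS update.
g_hat, mu, eps = g_static, 0.1, 1e-6
err_static, err_adapt = [], []
for t in range(200, T):
    err_static.append((rate[t] / g_static - velocity[t]) ** 2)
    err_adapt.append((rate[t] / g_hat - velocity[t]) ** 2)
    # in a real BMI the decoded state would drive this update; here the
    # simulated velocity is used to keep the sketch short
    g_hat += mu * (rate[t] - g_hat * velocity[t]) * velocity[t] / (eps + velocity[t] ** 2)

print(f"static decoder MSE:   {np.mean(err_static):.3f}")
print(f"adaptive decoder MSE: {np.mean(err_adapt):.3f}")
```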
Effects of checklist interface on non-verbal crew communications
NASA Technical Reports Server (NTRS)
Segal, Leon D.
1994-01-01
The investigation looked at the effects of the spatial layout and functionality of cockpit displays and controls on crew communication. Specifically, the study focused on the intra-cockpit crew interaction, and subsequent task performance, of airline pilots flying different configurations of a new electronic checklist, designed and tested in a high-fidelity simulator at NASA Ames Research Center. The first part of this proposal establishes the theoretical background for the assumptions underlying the research, suggesting that in the context of the interaction between a multi-operator crew and a machine, the design and configuration of the interface will affect interactions between individual operators and the machine, and subsequently, the interaction between operators. In view of the latest trends in cockpit interface design and flight-deck technology, in particular, the centralization of displays and controls, the introduction identifies certain problems associated with these modern designs and suggests specific design issues to which the expected results could be applied. A detailed research program and methodology is outlined and the results are described and discussed. Overall, differences in cockpit design were shown to impact the activity within the cockpit, including interactions between pilots and aircraft and the cooperative interactions between pilots.
Human Machine Interfaces for Teleoperators and Virtual Environments Conference
NASA Technical Reports Server (NTRS)
1990-01-01
In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system, the purpose is to train, inform, alter, or study the human operator to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.
Charting the energy landscape of metal/organic interfaces via machine learning
NASA Astrophysics Data System (ADS)
Scherbela, Michael; Hörmann, Lukas; Jeindl, Andreas; Obersteiner, Veronika; Hofmann, Oliver T.
2018-04-01
The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. In this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. We demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.
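The workflow in the abstract, an expensive reference method evaluated on about 100 structures and a cheap surrogate used to rank the rest, can be sketched as follows. The descriptors, the stand-in "DFT" energy function, and the Gaussian-process choice are assumptions made for illustration, not the authors' actual model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical stand-in data: each candidate adsorption structure is reduced
# to a small descriptor vector (e.g. coverage, tilt angle, registry offsets);
# the "DFT" formation energy is simulated by an unknown smooth function.
def fake_dft_energy(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 - 0.3 * x[:, 0] * x[:, 1]

pool = rng.uniform(-1, 1, size=(5000, 2))     # large pool of candidate structures
train_idx = rng.choice(len(pool), size=100, replace=False)
X_train = pool[train_idx]
y_train = fake_dft_energy(X_train)            # the 100 expensive "DFT" evaluations

model = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-4),
                                 normalize_y=True)
model.fit(X_train, y_train)

# Cheaply rank the full candidate pool by predicted formation energy.
pred, std = model.predict(pool, return_std=True)
best = np.argsort(pred)[:5]
print("lowest predicted formation energies:", np.round(pred[best], 3))
print("predictive std of those candidates: ", np.round(std[best], 3))
```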
Charting the energy landscape of metal/organic interfaces via machine learning
Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas; ...
2018-04-17
The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. Here in this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.
Charting the energy landscape of metal/organic interfaces via machine learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherbela, Michael; Hormann, Lukas; Jeindl, Andreas
The rich polymorphism exhibited by inorganic/organic interfaces is a major challenge for materials design. Here in this work, we present a method to efficiently explore the potential energy surface and predict the formation energies of polymorphs and defects. This is achieved by training a machine learning model on a list of only 100 candidate structures that are evaluated via dispersion-corrected density functional theory (DFT) calculations. Finally, we demonstrate the power of this approach for tetracyanoethylene on Ag(100) and explain the anisotropic ordering that is observed experimentally.
Intelligible machine learning with malibu.
Langlois, Robert E; Lu, Hui
2008-01-01
malibu is an open-source machine learning workbench developed in C/C++ for high-performance real-world applications, namely bioinformatics and medical informatics. It leverages third-party machine learning implementations for more robust bug-free software. This workbench handles several well-studied supervised machine learning problems including classification, regression, importance-weighted classification and multiple-instance learning. The malibu interface was designed to create reproducible experiments ideally run in a remote and/or command line environment. The software can be found at: http://proteomics.bioengr.uic.edu/malibu/index.html.
Scanning Rocket Impact Area with an UAV: First Results
NASA Astrophysics Data System (ADS)
Santos, C. C. C.; Costa, D. A. L. M.; Junior, V. L. S.; Silva, B. R. F.; Leite, D. L.; Junor, C. E. B. S.; Liberator, B. A.; Nogueira, M. B.; Senna, M. D.; Santiago, G. S.; Dantas, J. B. D.; Alsina, P. J.; Albuquerque, G. L. A.
2015-09-01
This paper presents the first subsystems developed for a UAV used in the safety procedures of sounding rocket campaigns. The aim of this UAV is to scan the rocket impact area in order to search for unexpected boats. To achieve this mission, the designers developed an image recognition algorithm, two human-machine interfaces, and two communication links, one to control the drone and the other to receive telemetry data. The paper describes the major engineering decisions taken to overcome the project constraints. A secondary goal of the project is to encourage young people to take part in the Brazilian space program; for this reason, most of the designers are undergraduate students working under the supervision of experts.
Cloud-based robot remote control system for smart factory
NASA Astrophysics Data System (ADS)
Wu, Zhiming; Li, Lianzhong; Xu, Yang; Zhai, Jingmei
2015-12-01
With the development of internet technologies and the wide application of robots, there is a clear trend toward integration between networks and robots. A cloud-based robot remote control system over networks for the smart factory is proposed, which enables remote users to control robots and thereby realize intelligent production. To achieve this, a three-layer system architecture is designed, comprising a user layer, a service layer and a physical layer. The remote control applications running on the cloud server are developed on Microsoft Azure. Moreover, DIV+CSS technologies are used to design the human-machine interface, lowering maintenance cost and improving development efficiency. Finally, an experiment is implemented to verify the feasibility of the approach.
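A minimal sketch of the service layer described above, assuming a plain HTTP endpoint (here built with Flask) that relays motion commands from the user layer to a stand-in robot driver in the physical layer. The endpoint name and the RobotDriver class are hypothetical; the paper's Azure deployment and HMI are not reproduced.

```python
# Service layer sketch: a cloud-hosted HTTP endpoint that forwards motion
# commands from the user layer (a browser HMI) to the physical layer.
from flask import Flask, request, jsonify

app = Flask(__name__)

class RobotDriver:
    """Stand-in for the physical-layer connection to a factory robot."""
    def move(self, axis, speed):
        print(f"moving axis {axis} at speed {speed}")
        return {"axis": axis, "speed": speed, "status": "ok"}

robot = RobotDriver()

@app.route("/api/move", methods=["POST"])
def move():
    cmd = request.get_json(force=True)
    result = robot.move(cmd.get("axis", 0), cmd.get("speed", 0.0))
    return jsonify(result)

if __name__ == "__main__":
    # In the paper the service runs on a cloud platform; here it simply
    # listens locally so the control flow can be exercised end to end.
    app.run(host="0.0.0.0", port=8080)
```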
Automotive HMI design and participatory user involvement: review and perspectives.
François, Mathilde; Osiurak, François; Fort, Alexandra; Crave, Philippe; Navarro, Jordan
2017-04-01
Automotive human-machine interface (HMI) design is facing new challenges due to the technological advances of the last decades. The design process has to be adapted in order to address human factors and road safety challenges. It is now widely accepted that user involvement in the HMI design process is valuable. However, the current form of user involvement in industry remains at the stages of concept assessment and usability tests. Moreover, the literature in other fields (e.g. information systems) promotes a broader user involvement with participatory design (i.e. the user is fully involved in the development process). This article reviews the established benefits of participatory design and reveals perspectives for automotive HMI quality improvement in a cognitive ergonomic framework. Practitioner Summary: Automotive HMI quality determines, in part, drivers' ability to perform primary driving tasks while using in-vehicle devices. User involvement in the design process is a key point to contribute to HMI quality. This article reports the potential benefits of a broad involvement from drivers to meet automotive HMI design challenges.
Man-machine interface issues in space telerobotics: A JPL research and development program
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1987-01-01
Technology issues related to the use of robots as man-extension or telerobot systems in space are discussed and exemplified. General considerations are presented on control and information problems in space teleoperation and on the characteristics of Earth orbital teleoperation. The JPL R and D work in the area of man-machine interface devices and techniques for sensing and computer-based control is briefly summarized. The thrust of this R and D effort is to render space teleoperation efficient and safe through the use of devices and techniques which will permit integrated and task-level (intelligent) two-way control communication between human operator and telerobot machine in Earth orbit. Specific control and information display devices and techniques are discussed and exemplified with development results obtained at JPL in recent years.
Micro-patterned graphene-based sensing skins for human physiological monitoring
NASA Astrophysics Data System (ADS)
Wang, Long; Loh, Kenneth J.; Chiang, Wei-Hung; Manna, Kausik
2018-03-01
Ultrathin, flexible, conformal, and skin-like electronic transducers are emerging as promising candidates for noninvasive and nonintrusive human health monitoring. In this work, a wearable sensing membrane is developed by patterning a graphene-based solution onto ultrathin medical tape, which can then be attached to the skin for monitoring human physiological parameters and physical activity. Here, the sensor is validated for monitoring finger bending/movements and for recognizing hand motion patterns, thereby demonstrating its future potential for evaluating athletic performance, physical therapy, and designing next-generation human-machine interfaces. Furthermore, this study also quantifies the sensor’s ability to monitor eye blinking and radial pulse in real-time, which can find broader applications for the healthcare sector. Overall, the printed graphene-based sensing skin is highly conformable, flexible, lightweight, nonintrusive, mechanically robust, and is characterized by high strain sensitivity.
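For the radial-pulse use case mentioned above, the core signal-processing step is simple peak picking on the strain signal. The sketch below assumes a synthetic stand-in waveform and standard SciPy peak detection; the sampling rate, thresholds, and the signal itself are illustrative, not the paper's data.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 200                                  # assumed sampling rate of the skin sensor (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic stand-in for the radial-pulse strain signal: sharp beats at
# 1.2 Hz (72 beats/min) on top of slow baseline drift and noise.
beats = (0.5 * (1 + np.sin(2 * np.pi * 1.2 * t))) ** 20
signal = beats + 0.05 * np.sin(2 * np.pi * 0.2 * t) + 0.02 * rng.normal(size=t.size)

# Enforce a refractory period of 0.5 s and a minimum prominence so baseline
# drift and noise are not counted as pulses.
peaks, _ = find_peaks(signal, distance=int(0.5 * fs), prominence=0.3)
heart_rate = 60 * len(peaks) / t[-1]
print(f"estimated heart rate: {heart_rate:.0f} beats/min")   # ~72
```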
Hands-free human-machine interaction with voice
NASA Astrophysics Data System (ADS)
Juang, B. H.
2004-05-01
Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation to save substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, to enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call "hands-free" human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.
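Acoustic echo cancellation, one of the techniques named above, is commonly built around a normalized LMS adaptive filter. The sketch below is a generic textbook-style NLMS canceller on synthetic signals, assuming the far-end reference is available and no near-end talker is present; it is not the Bell Labs implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the far-end (loudspeaker) signal x leaks into the
# microphone through an unknown echo path h; the near-end talker is silent.
N = 8000
x = rng.normal(size=N)                       # far-end reference signal
h = np.array([0.6, 0.3, -0.2, 0.1, 0.05])    # unknown room echo path
mic = np.convolve(x, h)[:N] + 0.01 * rng.normal(size=N)

# Normalized LMS adaptive filter: estimate the echo path and subtract
# the predicted echo from the microphone signal.
L, mu, eps = 8, 0.5, 1e-6
w = np.zeros(L)
err = np.zeros(N)
for n in range(L, N):
    x_win = x[n - L + 1:n + 1][::-1]         # most recent L reference samples
    echo_hat = np.dot(w, x_win)
    err[n] = mic[n] - echo_hat               # echo-cancelled output
    w += mu * err[n] * x_win / (eps + np.dot(x_win, x_win))

print(f"residual echo power: {np.mean(err[-1000:] ** 2):.4f} "
      f"(mic power {np.mean(mic[-1000:] ** 2):.4f})")
```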
Integrated intelligent sensor for the textile industry
NASA Astrophysics Data System (ADS)
Peltie, Philippe; David, Dominique
1996-08-01
A new sensor has been developed for pantyhose inspection. Unlike a first complete inspection machine devoted to post-manufacturing control of the whole panty, this sensor will be directly integrated on currently existing manufacturing machines and will combine the advantages of miniaturization. The aim is to design an intelligent, compact, and very cheap product, which should be integrated without requiring any modifications of the host machines. The sensor part was designed to achieve closed acquisition, and various solutions have been explored to maintain an adequate depth of field. The illumination source will be integrated in the device. The processing part will include correction facilities and electronic processing. Finally, high-level information will be output in order to interface directly with the manufacturing machine's controller.
A Wireless 32-Channel Implantable Bidirectional Brain Machine Interface
Su, Yi; Routhu, Sudhamayee; Moon, Kee S.; Lee, Sung Q.; Youm, WooSub; Ozturk, Yusuf
2016-01-01
All neural information systems (NIS) rely on sensing neural activity to supply commands and control signals for computers, machines and a variety of prosthetic devices. Invasive systems achieve a high signal-to-noise ratio (SNR) by eliminating the volume conduction problems caused by tissue and bone. An implantable brain machine interface (BMI) using intracortical electrodes provides excellent detection of a broad range of frequency oscillatory activities through the placement of a sensor in direct contact with cortex. This paper introduces a compact-sized implantable wireless 32-channel bidirectional brain machine interface (BBMI) to be used with freely-moving primates. The system is designed to monitor brain sensorimotor rhythms and present current stimuli with a configurable duration, frequency and amplitude in real time to the brain based on the brain activity report. The battery is charged via a novel ultrasonic wireless power delivery module developed for efficient delivery of power into a deeply-implanted system. The system was successfully tested through bench tests and in vivo tests on a behaving primate to record the local field potential (LFP) oscillation and stimulate the target area at the same time. PMID:27669264
A new six-degree-of-freedom force-reflecting hand controller for space telerobotics
NASA Technical Reports Server (NTRS)
Mcaffee, Douglas; Snow, Edward; Townsend, William; Robinson, Lee; Hanson, Joe
1990-01-01
A new 6 degree of freedom universal Force Reflecting Hand Controller (FRHC) was designed for use as the man-machine interface in teleoperated and telerobotic flight systems. The features of this new design include highly intuitive operation, excellent kinesthetic feedback, high fidelity force/torque feedback, a kinematically simple structure, mechanically decoupled motion in all 6 DOF, good back-drivability, and zero backlash. In addition, the new design has a much larger work envelope, smaller stowage volume, greater stiffness and responsiveness, and better overlap of the human operator's range of motion than do previous designs. The utility and basic operation of a new, flight prototype FRHC called the Model X is briefly discussed. The design heritage, general design goals, and design implementation of this advanced new generation of FRHCs are presented, followed by a discussion of basic features and the results of initial testing.
Concept Design of the Payload Handling Manipulator System. [space shuttle orbiters
NASA Technical Reports Server (NTRS)
1975-01-01
The design, requirements, and interface definition of a remote manipulator system developed to handle orbiter payloads are presented. End effector design, control system concepts, and man-machine engineering are considered along with crew station requirements and closed circuit television system performance requirements.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1991-01-01
Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.
Gold, Christian; Körber, Moritz; Lechner, David; Bengler, Klaus
2016-06-01
The aim of this study was to quantify the impact of traffic density and verbal tasks on takeover performance in highly automated driving. In highly automated vehicles, the driver has to occasionally take over vehicle control when approaching system limits. To ensure safety, the ability of the driver to regain control of the driving task under various driving situations and different driver states needs to be quantified. Seventy-two participants experienced takeover situations requiring an evasive maneuver on a three-lane highway with varying traffic density (zero, 10, and 20 vehicles per kilometer). In a between-subjects design, half of the participants were engaged in a verbal 20-Questions Task, representing speaking on the phone while driving in a highly automated vehicle. The presence of traffic in takeover situations led to longer takeover times and worse takeover quality in the form of shorter time to collision and more collisions. The 20-Questions Task did not influence takeover time but seemed to have minor effects on the takeover quality. For the design and evaluation of human-machine interaction in takeover situations of highly automated vehicles, the traffic state seems to play a major role, compared to the driver state, manipulated by the 20-Questions Task. The present results can be used by developers of highly automated systems to appropriately design human-machine interfaces and to assess the driver's time budget for regaining control. © 2016, Human Factors and Ergonomics Society.
Quadcopter control using a BCI
NASA Astrophysics Data System (ADS)
Rosca, S.; Leba, M.; Ionica, A.; Gamulescu, O.
2018-01-01
The paper presents how two elements that are ubiquitous nowadays can be interconnected. On one hand, drones, which are increasingly present and integrated into more and more fields of activity, beyond the military applications they originate from, moving towards entertainment, real estate, delivery, and so on. On the other hand, unconventional man-machine interfaces, which remain a rich topic to explore now and in the future. Of these, we chose the brain-computer interface (BCI), which allows human-machine interaction without requiring any moving elements. The research consists of the mathematical modeling and numerical simulation of a drone and a BCI. An application using a Parrot mini-drone and an Emotiv Insight BCI is then presented.
Techno-Human Mesh: The Growing Power of Information Technologies.
ERIC Educational Resources Information Center
West, Cynthia K.
This book examines the intersection of information technologies, power, people, and bodies. It explores how information technologies are on a path of creating efficiency, productivity, profitability, surveillance, and control, and looks at the ways in which human-machine interface technologies, such as wearable computers, biometric technologies,…
On the applicability of brain reading for predictive human-machine interfaces in robotics.
Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred
2013-01-01
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires similar behavior as performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
On the Applicability of Brain Reading for Predictive Human-Machine Interfaces in Robotics
Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred
2013-01-01
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires similar behavior as performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors. PMID:24358125
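A minimal sketch of single-trial detection of a P300-like pattern with a shrinkage-regularized LDA classifier, a common choice for passive BCIs. The synthetic epochs, windowed-mean features, and classifier settings are assumptions for illustration and do not reproduce the authors' pipeline or their classifier-transfer scheme.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for epoched EEG: 400 single trials, 8 channels,
# 50 time samples each; "target" trials carry a small P300-like deflection.
n_trials, n_ch, n_t = 400, 8, 50
X = rng.normal(size=(n_trials, n_ch, n_t))
y = rng.integers(0, 2, size=n_trials)                       # 1 = target, 0 = standard
p300 = np.exp(-0.5 * ((np.arange(n_t) - 30) / 4.0) ** 2)    # late positive bump
X[y == 1] += 0.4 * p300                                     # added to every channel

# Simple single-trial features: mean amplitude in consecutive 10-sample
# windows, per channel (a common passive-BCI feature set).
feats = X.reshape(n_trials, n_ch, 5, 10).mean(axis=3).reshape(n_trials, -1)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, feats, y, cv=5)
print(f"single-trial accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```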
A voyage to Mars: A challenge to collaboration between man and machines
NASA Technical Reports Server (NTRS)
Statler, Irving C.
1991-01-01
A speech addressing the design of man machine systems for exploration of space beyond Earth orbit from the human factors perspective is presented. Concerns relative to the design of automated and intelligent systems for the NASA Space Exploration Initiative (SEI) missions are largely based on experiences with integrating humans and comparable systems in aviation. The history, present status, and future prospect, of human factors in machine design are discussed in relation to a manned voyage to Mars. Three different cases for design philosophy are presented. The use of simulation is discussed. Recommendations for required research are given.
Selectivity and Longevity of Peripheral-Nerve and Machine Interfaces: A Review
Ghafoor, Usman; Kim, Sohee; Hong, Keum-Shik
2017-01-01
For those individuals with upper-extremity amputation, a daily normal living activity is no longer possible or it requires additional effort and time. With the aim of restoring their sensory and motor functions, theoretical and technological investigations have been carried out in the field of neuroprosthetic systems. For transmission of sensory feedback, several interfacing modalities including indirect (non-invasive), direct-to-peripheral-nerve (invasive), and cortical stimulation have been applied. Peripheral nerve interfaces demonstrate an edge over the cortical interfaces due to the sensitivity in attaining cortical brain signals. The peripheral nerve interfaces are highly dependent on interface designs and are required to be biocompatible with the nerves to achieve prolonged stability and longevity. Another criterion is the selection of nerves that allows minimal invasiveness and damages as well as high selectivity for a large number of nerve fascicles. In this paper, we review the nerve-machine interface modalities noted above with more focus on peripheral nerve interfaces, which are responsible for provision of sensory feedback. The invasive interfaces for recording and stimulation of electro-neurographic signals include intra-fascicular, regenerative-type interfaces that provide multiple contact channels to a group of axons inside the nerve and the extra-neural-cuff-type interfaces that enable interaction with many axons around the periphery of the nerve. Section Current Prosthetic Technology summarizes the advancements made to date in the field of neuroprosthetics toward the achievement of a bidirectional nerve-machine interface with more focus on sensory feedback. In the Discussion section, the authors propose a hybrid interface technique for achieving better selectivity and long-term stability using the available nerve interfacing techniques. PMID:29163122
Rojas, Mario; Ponce, Pedro; Molina, Arturo
2016-08-01
This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head movements and speech recognition controls, was designed to facilitate mobility skills for severely disabled people. Thirteen tasks, which are common to all wheelchair users, were attempted five times by controlling the wheelchair with the virtual joystick and the hands-free interfaces, in different areas, by disabled and non-disabled people. Even though the prototype has an intelligent navigation control, based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scored values showed that both controls, the head movements and the virtual joystick, have similar capabilities, 92.3% and 100%, respectively. However, the 54.6% capacity score obtained for the speech control interface indicates the need for navigation assistance to accomplish some of the goals. Furthermore, the evaluation time indicates those skills which require more user training with the interface, as well as specifications to improve the total performance of the wheelchair.
Complete scanpaths analysis toolbox.
Augustyniak, Piotr; Mikrut, Zbigniew
2006-01-01
This paper presents a complete open software environment for control, data processing and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to various visual information-based man-machine interfacing, human-emulated automatic visual systems, or scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infra-red reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower one communicating with the eyetracker output file, the middle one detecting scanpath events on a physiological background, and the upper one consisting of experiment schedule scripts, statistics and summaries. Several examples of visual experiments carried out with use of the presented toolbox complete the paper.
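The middle layer mentioned above, detecting scanpath events on a physiological background, typically amounts to separating fixations from saccades. Below is a sketch of a dispersion-threshold (I-DT style) fixation detector; the toolbox itself is Matlab-based, so this Python version, its thresholds, and the synthetic gaze trace are only illustrative.

```python
import numpy as np

def detect_fixations(x, y, fs, max_dispersion=1.0, min_duration=0.1):
    """Minimal dispersion-threshold (I-DT style) fixation detector.
    x, y: gaze coordinates in degrees; fs: eyetracker sampling rate in Hz.
    Returns a list of (start_sample, end_sample) pairs (inclusive)."""
    win = int(min_duration * fs)
    fixations, i = [], 0

    def dispersion(a, b):               # (max-min) in x plus (max-min) in y
        return np.ptp(x[a:b]) + np.ptp(y[a:b])

    while i + win <= len(x):
        if dispersion(i, i + win) > max_dispersion:
            i += 1                      # no fixation starting here
            continue
        j = i + win
        while j < len(x) and dispersion(i, j + 1) <= max_dispersion:
            j += 1                      # extend the fixation window
        fixations.append((i, j - 1))    # samples i .. j-1 form one fixation
        i = j
    return fixations

# Example: 1 s of synthetic gaze at 250 Hz with three stable gaze positions.
fs = 250
x = np.concatenate([np.full(100, 0.0), np.full(120, 5.0), np.full(30, 10.0)])
y = np.zeros_like(x) + 0.05 * np.random.default_rng(0).normal(size=x.size)
print(detect_fixations(x, y, fs))       # -> [(0, 99), (100, 219), (220, 249)]
```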
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Design of a 32-Channel EEG System for Brain Control Interface Applications
Wang, Ching-Sung
2012-01-01
This study integrates hardware circuit design with supporting software interface development to realize a 32-channel EEG system for BCI applications. Since the EEG signals of the human body are generally very weak, the design must not only prevent noise interference but also avoid waveform distortion, waveform offset, and similar effects; therefore, a preamplifier with a high common-mode rejection ratio and a high signal-to-noise ratio is very important. Moreover, the friction between the electrode pads and the skin, as well as the use of a dual power supply, generates a DC bias that affects the measured signals. For this reason, this study designs an improved single-supply AC-coupled circuit, which effectively reduces the DC bias and the error caused by component tolerances. At the same time, adjustable amplification and filtering are implemented digitally, so they can be configured for different EEG frequency bands. In the analog circuit, a frequency band is first extracted by the filtering circuit, and digital filtering is then used to adjust the extracted band to the target frequency band; this is combined with a MATLAB-designed man-machine interface for displaying the brain waves. Finally, the measured signals are compared with those of a traditional 32-channel EEG system. In addition to meeting the IFCN standards, the system design also underwent measurement verification in a standard EEG isolation room in order to demonstrate its accuracy and reliability. PMID:22778545
Design of a 32-channel EEG system for brain control interface applications.
Wang, Ching-Sung
2012-01-01
This study integrates hardware circuit design with supporting software interface development to realize a 32-channel EEG system for BCI applications. Since the EEG signals of the human body are generally very weak, the design must not only prevent noise interference but also avoid waveform distortion, waveform offset, and similar effects; therefore, a preamplifier with a high common-mode rejection ratio and a high signal-to-noise ratio is very important. Moreover, the friction between the electrode pads and the skin, as well as the use of a dual power supply, generates a DC bias that affects the measured signals. For this reason, this study designs an improved single-supply AC-coupled circuit, which effectively reduces the DC bias and the error caused by component tolerances. At the same time, adjustable amplification and filtering are implemented digitally, so they can be configured for different EEG frequency bands. In the analog circuit, a frequency band is first extracted by the filtering circuit, and digital filtering is then used to adjust the extracted band to the target frequency band; this is combined with a MATLAB-designed man-machine interface for displaying the brain waves. Finally, the measured signals are compared with those of a traditional 32-channel EEG system. In addition to meeting the IFCN standards, the system design also underwent measurement verification in a standard EEG isolation room in order to demonstrate its accuracy and reliability.
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
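The energy argument behind multi-tiered clustering can be illustrated with the usual first-order radio model: sending over a short hop to a cluster head costs far less than transmitting directly to a distant sink. The model constants, node layout, and cluster-head choice below are generic textbook assumptions, not the paper's optimized algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order radio energy model (a common textbook model, not necessarily
# the paper's): E_tx = E_elec*k + eps_amp*k*d^2 per k-bit packet over distance d.
E_ELEC, EPS_AMP, BITS = 50e-9, 100e-12, 2000

def tx_energy(d):
    return E_ELEC * BITS + EPS_AMP * BITS * d ** 2

def rx_energy():
    return E_ELEC * BITS

# 200 sensor nodes scattered non-uniformly; sink at the origin.
nodes = np.abs(rng.normal(loc=[60, 60], scale=[40, 15], size=(200, 2)))
sink = np.array([0.0, 0.0])

# (a) every node transmits directly to the sink
direct = sum(tx_energy(np.linalg.norm(n - sink)) for n in nodes)

# (b) nodes transmit to the nearest of 10 randomly chosen cluster heads,
# which each forward one aggregated packet to the sink
heads = nodes[rng.choice(len(nodes), size=10, replace=False)]
clustered = 0.0
for n in nodes:
    d_head = np.min(np.linalg.norm(heads - n, axis=1))
    clustered += tx_energy(d_head) + rx_energy()      # node->head + head receives
clustered += sum(tx_energy(np.linalg.norm(h - sink)) for h in heads)

print(f"direct:    {direct * 1e3:.2f} mJ per round")
print(f"clustered: {clustered * 1e3:.2f} mJ per round")
```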
Development of sensitized pick coal interface detector system
NASA Technical Reports Server (NTRS)
Burchill, R. F.
1979-01-01
One approach for detection of the coal interface is measurement of the pick cutting loads and shock through the use of pick strain gage load cells and accelerometers. The cutting drum of a longwall mining machine contains a number of cutting picks. In order to measure pick loads and shocks, one pick was instrumented and telemetry was used to transmit the signals from the drum to an instrument-type tape recorder. A data system using FM telemetry was designed to transfer cutting bit load and shock information from the drum of a longwall shearer coal mining machine to a chassis-mounted data recorder.
Seating Considerations for Spaceflight: The Human to Machine Interface
NASA Technical Reports Server (NTRS)
Gohmert, Dustin M.
2011-01-01
Seating is one of the most critical components to be considered during design of a spacecraft. Since seats are the final interface between the occupant and the vehicle wherein all launch and landing operations are performed, significant effort must be spent to ensure proper integration of the human to the spacecraft. The importance of seating can be divided into two categories: seat layout and seat design. The layout of the seats drives the overall cabin configuration - from displays and controls, to windows, to stowage, to egress paths. Since the layout of the seats is such a critical design parameter within the crew compartment, it is one of the first design challenges that must be completed in the critical path of the spacecraft design. In consideration of seat layout in the vehicle, it is important for the designers to account for often intangible factors such as safety, operability, contingency performance, crew rescue. Seat layout will lead to definition of the quantity, shape, and posture of the seats. The seats of the craft must restrain and protect the occupant in all seated phases of flight, while allowing for nominal mission performance. In design of a spacecraft seat, the general posture of the occupant and the landing loads to be encountered are the greatest drivers of overall design. Variances, such as upright versus recumbent postures will dictate fit of the seat to the occupant and drive the total envelope of the seat around the occupant. Seat design revolves around applying sound principles of seated occupant protection coupled with the unique environments driven by the seat layout, landing loads, and operational and emergency scenarios.
Triboelectrification based motion sensor for human-machine interfacing.
Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin
2014-05-28
We present triboelectrification based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as directions and magnitude of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density up to 10.71 mA/m(2), and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio up to 1000, which enables the devices as sensors to accurately record and transform the motions of the human joints, such as elbow, knee, heel, and even fingers, and thus renders it a superior and unique invention in the field of HMI.
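The FFT step described above, recovering the motion rate from the periodic electrode output, can be sketched as follows. The synthetic signal, the 120-degree sweep assumption, and the conversion to angular speed are illustrative stand-ins, not the paper's measured data.

```python
import numpy as np

fs = 1000                                # assumed sampling rate of the electrode array (Hz)
t = np.arange(0, 4, 1 / fs)

# Hypothetical stand-in for the triboelectric output of an elbow flexing
# periodically: one output cycle per flexion-extension cycle, plus noise.
f_motion = 1.5                           # flexion-extension cycles per second
v = np.sin(2 * np.pi * f_motion * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)

# Locate the dominant spectral peak, as in the paper's FFT analysis.
spectrum = np.abs(np.fft.rfft(v * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
f_peak = freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin

# If one flexion-extension cycle sweeps ~120 degrees back and forth,
# the mean angular speed follows directly from the detected frequency.
sweep_deg = 120.0
mean_speed = 2 * sweep_deg * f_peak                # deg/s (assumed joint geometry)
print(f"dominant frequency: {f_peak:.2f} Hz, mean angular speed ~ {mean_speed:.0f} deg/s")
```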
Balasubramanian, Karthikeyan; Southerland, Joshua; Vaidya, Mukta; Qian, Kai; Eleryan, Ahmed; Fagg, Andrew H; Sluzky, Marc; Oweiss, Karim; Hatsopoulos, Nicholas
2013-01-01
Operant conditioning with biofeedback has been shown to be an effective method to modify neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body such as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which an amputated monkey is trained to control a multiple degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees-of-freedom. We present a number of behavioral metrics by which to assess improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputation is arguably the most clinically-relevant model of human amputation that could have direct implications for developing a neural prosthesis to treat humans with missing upper limbs.
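The idea that each controlled dimension is a synergy over several joint degrees of freedom can be illustrated with a low-dimensional-to-high-dimensional mapping. The sketch below uses PCA on synthetic joint trajectories as a stand-in for the paper's behaviorally defined synergies; the dimensions and data are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for recorded reach-to-grasp joint trajectories:
# 500 time samples of a 7-DOF arm/hand whose joints covary strongly.
latent = rng.normal(size=(500, 2))                 # two underlying "behaviors"
mixing = rng.normal(size=(2, 7))                   # how behaviors drive the joints
joints = latent @ mixing + 0.05 * rng.normal(size=(500, 7))

# Extract joint synergies with PCA (here simply the top singular vectors).
mean = joints.mean(axis=0)
_, _, vt = np.linalg.svd(joints - mean, full_matrices=False)
synergies = vt[:2]                                 # top-2 synergy vectors (2 x 7)

def command_to_joints(command):
    """Map a 2-D brain-derived command onto a full 7-DOF joint posture."""
    return mean + command @ synergies

# A single decoded value per controlled dimension yields a full posture.
posture = command_to_joints(np.array([1.0, -0.5]))
print(np.round(posture, 2))
```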
Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control
NASA Astrophysics Data System (ADS)
Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.
2012-04-01
A brain-machine interface (BMI) links a user's brain activity directly to an external device. It enables a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has been proposed to replace humans with robots in the performance of dangerous tasks like explosives handling/defusing, hazardous materials handling, firefighting, etc. There are mainly two types of BMI based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features and, finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface with a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. The acceleration and deceleration or stopping of the robot is controlled based on the attention level of the wearer. In addition, left and right eyeball movements control the direction of the robot.
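A minimal sketch of the band-power step described above: split the EEG into the classical bands, form a simple attention index, and map it to a speed command. The Welch-based band power, the beta/(alpha+theta) index, and the clipping to [0, 1] are common heuristics assumed here for illustration, not necessarily the authors' exact rules.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                           # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.normal(size=10 * fs)                     # stand-in single-channel EEG

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(signal, lo, hi):
    """Average power of `signal` between lo and hi Hz (Welch periodogram)."""
    f, pxx = welch(signal, fs=fs, nperseg=2 * fs)
    return pxx[(f >= lo) & (f < hi)].mean()

powers = {name: band_power(eeg, lo, hi) for name, (lo, hi) in bands.items()}

# A simple attention index used in many consumer BCI demos: more beta
# relative to alpha + theta = more attentive.
attention = powers["beta"] / (powers["alpha"] + powers["theta"])
speed = float(np.clip(attention, 0.0, 1.0))        # normalized robot speed command
print({k: round(v, 4) for k, v in powers.items()}, "-> speed", round(speed, 2))
```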
Adding Pluggable and Personalized Natural Control Capabilities to Existing Applications
Lamberti, Fabrizio; Sanna, Andrea; Carlevaris, Gilles; Demartini, Claudio
2015-01-01
Advancements in input device and sensor technologies led to the evolution of the traditional human-machine interaction paradigm based on the mouse and keyboard. Touch-, gesture- and voice-based interfaces are integrated today in a variety of applications running on consumer devices (e.g., gaming consoles and smartphones). However, to allow existing applications running on desktop computers to utilize natural interaction, significant re-design and re-coding efforts may be required. In this paper, a framework designed to transparently add multi-modal interaction capabilities to applications to which users are accustomed is presented. Experimental observations confirmed the effectiveness of the proposed framework and led to a classification of those applications that could benefit more from the availability of natural interaction modalities. PMID:25635410
Theoretical design of near - infrared organic compounds
NASA Astrophysics Data System (ADS)
Brymora, Katarzyna; Ducasse, Laurent; Dautel, Olivier; Lartigau-Dagron, Christine; Castet, Frédéric
The world follows the path of digital development faster than ever before. In consequence, the Human Machine Interface (HMI) market is growing as well, and it requires innovation. The goal of our work is to achieve organic infrared (IR) photodetectors that meet the performance requirements for HMI applications. Quantum chemical calculations are used to guide the synthesis and technology development. In this work, within the framework of density functional theory (DFT) and time-dependent density functional theory (TD-DFT), we consider a large variety of materials, exploring small donor-acceptor-donor molecules and copolymers alternating donor and acceptor monomers. We provide a structure-property relationship in view of designing new near-infrared (NIR) absorbing organic molecules and polymers.
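The design target can be stated with the standard relation between an optical gap and its absorption wavelength (general physics, not a value taken from the abstract):

```latex
\lambda \;=\; \frac{hc}{E_{\mathrm{gap}}} \;\approx\; \frac{1240\ \mathrm{eV\,nm}}{E_{\mathrm{gap}}}
\qquad\Longrightarrow\qquad
E_{\mathrm{gap}} = 1.45\ \mathrm{eV} \;\rightarrow\; \lambda \approx 855\ \mathrm{nm}
```

So pushing the effective gap of the donor-acceptor system below roughly 1.5-1.6 eV moves the absorption onset into the near-infrared window relevant for HMI photodetectors.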
Adding pluggable and personalized natural control capabilities to existing applications.
Lamberti, Fabrizio; Sanna, Andrea; Carlevaris, Gilles; Demartini, Claudio
2015-01-28
Advancements in input device and sensor technologies led to the evolution of the traditional human-machine interaction paradigm based on the mouse and keyboard. Touch-, gesture- and voice-based interfaces are integrated today in a variety of applications running on consumer devices (e.g., gaming consoles and smartphones). However, to allow existing applications running on desktop computers to utilize natural interaction, significant re-design and re-coding efforts may be required. In this paper, a framework designed to transparently add multi-modal interaction capabilities to applications to which users are accustomed is presented. Experimental observations confirmed the effectiveness of the proposed framework and led to a classification of those applications that could benefit more from the availability of natural interaction modalities.
Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design.
Aromaa, Susanna; Väänänen, Kaisa
2016-09-01
In recent years, the use of virtual prototyping has increased in product development processes, especially in the assessment of complex systems targeted at end-users. The purpose of this study was to evaluate the suitability of virtual prototyping to support human factors/ergonomics evaluation (HFE) during the design phase. Two different virtual prototypes were used: augmented reality (AR) and virtual environment (VE) prototypes of a maintenance platform of a rock crushing machine. Nineteen designers and other stakeholders were asked to assess the suitability of the prototype for HFE evaluation. Results indicate that the system model characteristics and user interface affect the experienced suitability. The VE system was valued as being more suitable to support the assessment of visibility, reach, and the use of tools than the AR system. The findings of this study can be used as a guidance for the implementing virtual prototypes in the product development process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Man/Machine Interaction Dynamics And Performance (MMIDAP) capability
NASA Technical Reports Server (NTRS)
Frisch, Harold P.
1991-01-01
The creation of an ability to study interaction dynamics between a machine and its human operator can be approached from a myriad of directions. The Man/Machine Interaction Dynamics and Performance (MMIDAP) project seeks to create an ability to study the consequences of machine design alternatives relative to the performance of both machine and operator. The class of machines to which this study is directed includes those that require the intelligent physical exertions of a human operator. While Goddard's Flight Telerobotic's program was expected to be a major user, basic engineering design and biomedical applications reach far beyond telerobotics. Ongoing efforts are outlined of the GSFC and its University and small business collaborators to integrate both human performance and musculoskeletal data bases with analysis capabilities necessary to enable the study of dynamic actions, reactions, and performance of coupled machine/operator systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...
Code of Federal Regulations, 2011 CFR
2011-10-01
... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...
Code of Federal Regulations, 2013 CFR
2013-10-01
... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...
Code of Federal Regulations, 2014 CFR
2014-10-01
... physical relationships in the subsystem or system; (2) A description of the railroad operation or... requirements; (5) A preliminary human factors analysis, including a complete description of all human-machine interfaces and the impact of interoperability requirements on the same; (6) An analysis of the applicability...
NASA Technical Reports Server (NTRS)
1981-01-01
The impact of modern technology on the role, responsibility, authority, and performance of human operators in modern aircraft and ATC systems was examined in terms of principles defined by Paul Fitts. Research into human factors in aircraft operations and the use of human factors engineering for aircraft safety improvements were discussed, and features of the man-machine interface in computerized cockpit warning systems are examined. The design and operational features of computerized avionics displays and HUDs are described, along with results of investigations into pilot decision-making behavior, aircrew procedural compliance, and aircrew judgment training programs. Experiments in vision and visual perception are detailed, as are behavioral studies of crew workload, coordination, and complement. The effectiveness of pilot selection, screening, and training techniques are assessed, as are methods for evaluating pilot performance.
NASA Technical Reports Server (NTRS)
Johnson, David W.
1992-01-01
Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.
Overview Electrotactile Feedback for Enhancing Human Computer Interface
NASA Astrophysics Data System (ADS)
Pamungkas, Daniel S.; Caesarendra, Wahyu
2018-04-01
To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback is increasingly being utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user's interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, the sensory receptors within the skin that sense tactile stimuli and electric currents are described, along with several factors that influence how electric signals are transmitted to the brain through the skin.
Steering a Tractor by Means of an EMG-Based Human-Machine Interface
Gomez-Gil, Jaime; San-Jose-Gonzalez, Israel; Nicolas-Alonso, Luis Fernando; Alonso-Garcia, Sergio
2011-01-01
An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates by means of the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from the Emotiv Company, was employed. This device, by means of 14 saline sensors, measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver's scalp: eyes looking to the right and jaw opened, eyes looking to the right and jaw closed, eyes looking to the left and jaw opened, and eyes looking to the left and jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested these three guidance systems along three different trajectories: a straight line, a step, and a circumference. The accuracy of the EMG-based HMI guidance was lower than the accuracy obtained by manual guidance, which was lower in turn than the accuracy obtained by the autonomous GPS guidance; the computed standard deviations of error to the desired trajectory in the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviation between the manual guidance and the EMG-based HMI guidance differed by only 7 cm, and this difference is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor by an EMG-based HMI with almost the same accuracy as with manual steering. PMID:22164006
Lin, Yi-Jung; Speedie, Stuart
2003-01-01
User interface design is one of the most important parts of developing applications. Nowadays, a quality user interface must not only accommodate interaction between machines and users, but also needs to recognize differences among users and provide functionality tailored from role to role or even individual to individual. With the web-based application of our Teledermatology consult system, the development environment provides highly useful opportunities to create dynamic user interfaces, which lets us gain greater access control and has the potential to increase the efficiency of the system. We will describe the two models of user interfaces in our system: Role-based and Adaptive. PMID:14728419
Enhanced operator interface for hand-held landmine detector
NASA Astrophysics Data System (ADS)
Herman, Herman; McMahill, Jeffrey D.; Kantor, George
2001-10-01
As landmines get harder to detect, the complexity of landmine detectors has also been increasing. To increase the probability of detection and decrease the false alarm rate for low-metallic landmines, many detectors employ multiple sensing modalities, including radar and a metal detector. Unfortunately, the operator interface for these new detectors has stayed much the same as for the older detectors. Although the amount of information that the new detectors acquire has increased significantly, the interface has been limited to a simple audio interface. We are currently developing a hybrid audiovisual interface for enhancing the overall performance of the detector. The hybrid audiovisual interface combines the simplicity of the audio output with the rich spatial content of the video display. It is designed to optimally present the output of the detector and also to give the proper feedback to the operator. Instead of presenting all the data to the operator simultaneously, the interface allows the operator to access the information as needed. This capability is critical to avoid information overload, which can significantly reduce the performance of the operator. The audio is used as the primary notification signal, while the video is used for further feedback, discrimination, localization, and sensor fusion. The idea is to let the operator get the feedback that he needs and enable him to examine the data in the most efficient way. We are also looking at a hybrid man-machine detection system which utilizes precise sweeping by the machine and powerful human cognitive ability. In such a hybrid system, the operator is free to concentrate on discrimination tasks, such as manually fusing the output of the different sensing modalities, instead of worrying about the proper sweep technique. In developing this concept, we have been using a virtual mine lane to validate some of these concepts. We obtained very encouraging results from our preliminary tests, which clearly show that, with the proper feedback, the performance of the operator can be improved significantly in a very short time.
A chronic generalized bi-directional brain-machine interface.
Rouse, A G; Stanslaski, S R; Cong, P; Jensen, R M; Afshar, P; Ullestad, D; Gupta, R; Molnar, G F; Moran, D W; Denison, T J
2011-06-01
A bi-directional neural interface (NI) system was designed and prototyped by incorporating a novel neural recording and processing subsystem into a commercial neural stimulator architecture. The NI system prototype leverages the system infrastructure from an existing neurostimulator to ensure reliable operation in a chronic implantation environment. In addition to providing predicate therapy capabilities, the device adds key elements to facilitate chronic research, such as four channels of electrocorticogram/local field potential amplification and spectral analysis, a three-axis accelerometer, algorithm processing, event-based data logging, and wireless telemetry for data uploads and algorithm/configuration updates. The custom-integrated micropower sensor and interface circuits facilitate extended operation in a power-limited device. The prototype underwent significant verification testing to ensure reliability, and meets the requirements for a class CF instrument per IEC-60601 protocols. The ability of the device system to process and aid in classifying brain states was preclinically validated using an in vivo non-human primate model for brain control of a computer cursor (i.e. brain-machine interface or BMI). The primate BMI model was chosen for its ability to quantitatively measure signal decoding performance from brain activity that is similar in both amplitude and spectral content to other biomarkers used to detect disease states (e.g. Parkinson's disease). A key goal of this research prototype is to help broaden the clinical scope and acceptance of NI techniques, particularly real-time brain state detection. These techniques have the potential to be generalized beyond motor prosthesis, and are being explored for unmet needs in other neurological conditions such as movement disorders, stroke and epilepsy.
2007-09-01
behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that...Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots...based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic
NASA Technical Reports Server (NTRS)
Malone, T. B.; Micocci, A.
1975-01-01
The alternate methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (tasks and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity to critical problems.
Identifying well-formed biomedical phrases in MEDLINE® text.
Kim, Won; Yeganova, Lana; Comeau, Donald C; Wilbur, W John
2012-12-01
In the modern world people frequently interact with retrieval systems to satisfy their information needs. Humanly understandable well-formed phrases represent a crucial interface between humans and the web, and the ability to index and search with such phrases is beneficial for human-web interactions. In this paper we consider the problem of identifying humanly understandable, well formed, and high quality biomedical phrases in MEDLINE documents. The main approaches used previously for detecting such phrases are syntactic, statistical, and a hybrid approach combining these two. In this paper we propose a supervised learning approach for identifying high quality phrases. First we obtain a set of known well-formed useful phrases from an existing source and label these phrases as positive. We then extract from MEDLINE a large set of multiword strings that do not contain stop words or punctuation. We believe this unlabeled set contains many well-formed phrases. Our goal is to identify these additional high quality phrases. We examine various feature combinations and several machine learning strategies designed to solve this problem. A proper choice of machine learning methods and features identifies in the large collection strings that are likely to be high quality phrases. We evaluate our approach by making human judgments on multiword strings extracted from MEDLINE using our methods. We find that over 85% of such extracted phrase candidates are humanly judged to be of high quality. Published by Elsevier Inc.
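As a rough illustration of the supervised phrase-quality scoring described above, the sketch below trains a classifier on a handful of known good phrases against raw candidate strings, using a few simple surface features. The feature set, the scikit-learn model choice, and the toy data are assumptions for illustration, not the authors' actual features or pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): score candidate multiword
# strings as well-formed phrases with a classifier over simple surface features.
from sklearn.linear_model import LogisticRegression
import numpy as np

def features(phrase):
    tokens = phrase.lower().split()
    return [
        len(tokens),                            # phrase length in tokens
        float(np.mean([len(t) for t in tokens])),  # average token length
        sum(t.isdigit() for t in tokens),       # count of numeric tokens
        sum(t.endswith("s") for t in tokens),   # crude plural signal
    ]

# Hypothetical training data: known good phrases (positive) and raw candidate
# strings sampled from the corpus (treated as negative here, a simplification
# of the positive/unlabeled setting in the paper).
positives = ["myocardial infarction", "gene expression profiling"]
candidates = ["of the patients with", "table 2 shows the"]

X = np.array([features(p) for p in positives + candidates])
y = np.array([1] * len(positives) + [0] * len(candidates))

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([features("tumor necrosis factor")])[:, 1])
```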
Advances in Machine Technology.
Clark, William R; Villa, Gianluca; Neri, Mauro; Ronco, Claudio
2018-01-01
Continuous renal replacement therapy (CRRT) machines have evolved into devices specifically designed for critically ill over the past 40 years. In this chapter, a brief history of this evolution is first provided, with emphasis on the manner in which changes have been made to address the specific needs of the critically ill patient with acute kidney injury. Subsequently, specific examples of technology developments for CRRT machines are discussed, including the user interface, pumps, pressure monitoring, safety features, and anticoagulation capabilities. © 2018 S. Karger AG, Basel.
Investigation of human-robot interface performance in household environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.
2016-05-01
Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.
Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing.
Park, Ki-Woong; Lee, Younho; Baek, Sung Hoon
2017-08-08
In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber-physical systems. The objects include mobile platforms such as smartphones, and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on these objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board, and present evaluation results from our testbed. From the evaluation of T-Wing, we compare our system against alternative approaches that implement this functionality using only the cellular or Wi-Fi interface (but not both), and show that in typical usage, T-Wing consumes up to 15× less energy and is faster by an order of magnitude.
Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.
2001-01-01
The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.
Matching brain-machine interface performance to space applications.
Citi, Luca; Tonet, Oliver; Marinelli, Martina
2009-01-01
A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and a HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.
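The matching idea above, finding overlap between what an interface class can deliver and what an application class requires in latency and throughput, can be sketched as a simple feasibility check. All numbers below are placeholders, not values from the paper.

```python
# Illustrative sketch of the overlap idea: an interface class is a feasible
# match for an application if its achievable latency is at most the
# application's maximum tolerated latency and its throughput is at least
# the required throughput.
interfaces = {
    "noninvasive BMI": {"latency_s": 2.0, "throughput_bps": 1.0},
    "joystick":        {"latency_s": 0.2, "throughput_bps": 10.0},
}
applications = {
    "rover waypoint driving": {"max_latency_s": 5.0, "min_throughput_bps": 0.5},
    "robotic arm teleop":     {"max_latency_s": 0.5, "min_throughput_bps": 5.0},
}

for app, req in applications.items():
    for hmi, perf in interfaces.items():
        ok = (perf["latency_s"] <= req["max_latency_s"]
              and perf["throughput_bps"] >= req["min_throughput_bps"])
        print(f"{hmi:15s} -> {app}: {'feasible' if ok else 'not feasible'}")
```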
Automation's Effect on Library Personnel.
ERIC Educational Resources Information Center
Dakshinamurti, Ganga
1985-01-01
Reports on survey studying the human-machine interface in Canadian university, public, and special libraries. Highlights include position category and educational background of 118 participants, participants' feelings toward automation, physical effects of automation, diffusion in decision making, interpersonal communication, future trends,…
Larson, Eric; Terry, Howard P; Canevari, Margaux M; Stepp, Cara E
2013-01-01
Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, auditory feedback remains underutilized for HMI control, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups and the new categories group), and a trend toward an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native vowel) targets and an unambiguous cue.
NASA Technical Reports Server (NTRS)
Abbott, Kathy H.; Schutte, Paul C.
1989-01-01
A development status evaluation is presented for the NASA-Langley Intelligent Cockpit Aids research program, which encompasses AI, human/machine interfaces, and conventional automation. Attention is being given to decision-aiding concepts for human-centered automation, with emphasis on inflight subsystem fault management, inflight mission replanning, and communications management. The cockpit envisioned is for advanced commercial transport aircraft.
Tactile feedback to the palm using arbitrarily shaped DEA
NASA Astrophysics Data System (ADS)
Mößinger, Holger; Haus, Henry; Kauer, Michaela; Schlaak, Helmut F.
2014-03-01
Tactile stimulation enhances user experience and efficiency in human-machine interaction by providing information to the human brain via another sensory channel. DEA (dielectric elastomer actuators) as tactile interfaces have been a focus of research in recent years; examples are (vibro-)tactile keyboards and Braille displays. These applications of DEA focus mainly on interfacing with the user's fingers or fingertips only, demonstrating the high spatial resolution achievable with DEA. Besides providing high resolution, the flexibility of DEA also allows designing free-form surfaces equipped with single actuators or actuator matrices which can be fitted to the surface of the human skin. The actuators can then be used to provide tactile stimuli to different areas of the body, not only to the fingertips. Utilizing and demonstrating this flexibility, we designed a free-form DEA pad shaped to fit into the inside of the human palm. This pad consists of four single actuators which can provide, e.g., directional information such as left, right, up, and down. To demonstrate the value of such free-form actuators we manufactured a PC mouse using 3D printing processes. The actuator pad is mounted on the back of the mouse, resting against the palm while operating it. Software on the PC allows control of the vibration patterns displayed by the actuators. This makes it possible to assist the user by drawing attention to certain directions or by discriminating between different modes like "pick" or "manipulate". Results of first tests of the device show an improved user experience while operating the PC mouse.
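As a purely hypothetical sketch of how such a palm pad might be driven, the snippet below maps a few directional and mode cues onto amplitudes for the four actuators and hands them to a caller-supplied driver callback. The pattern table and driver interface are invented for illustration and are not the authors' implementation.

```python
# Hypothetical mapping from cues to amplitudes of the four palm actuators.
ACTUATORS = ("left", "right", "up", "down")

PATTERNS = {
    "left":       {"left": 1.0, "right": 0.0, "up": 0.0, "down": 0.0},
    "right":      {"left": 0.0, "right": 1.0, "up": 0.0, "down": 0.0},
    "pick":       {"left": 1.0, "right": 1.0, "up": 0.0, "down": 0.0},
    "manipulate": {"left": 0.0, "right": 0.0, "up": 1.0, "down": 1.0},
}

def drive(pattern_name, set_amplitude):
    """Apply a named vibration pattern via a caller-supplied driver callback."""
    for actuator in ACTUATORS:
        set_amplitude(actuator, PATTERNS[pattern_name][actuator])

# Example with a stand-in driver that just prints the commands:
drive("pick", lambda a, amp: print(f"actuator {a}: amplitude {amp}"))
```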
Marzullo, T C; Dudley, J R; Miller, C R; Trejo, L; Kipke, D R
2005-01-01
Brain machine interface development typically falls into two arenas, invasive extracellular recording and non-invasive electroencephalogram recording methods. The relationship between action potentials and field potentials is not well understood, and investigation of interrelationships may improve design of neuroprosthetic control systems. Rats were trained on a motor learning task whereby they had to insert their noses into an aperture while simultaneously pressing down on levers with their forepaws; spikes, local field potentials (LFPs), and electrocorticograms (ECoGs) over the motor cortex were recorded and characterized. Preliminary results suggest that the LFP activity in lower cortical layers oscillates with the ECoG.
NASA Astrophysics Data System (ADS)
Johnson, Bradley; May, Gayle L.; Korn, Paula
A recent symposium produced papers in the areas of solar system exploration, man-machine interfaces, cybernetics, virtual reality, telerobotics, life support systems, and the scientific and technological spinoffs from the NASA space program. A number of papers also addressed the social and economic impacts of the space program. For individual titles, see A95-87468 through A95-87479.
An evaluation of software tools for the design and development of cockpit displays
NASA Technical Reports Server (NTRS)
Ellis, Thomas D., Jr.
1993-01-01
The use of all-glass cockpits at the NASA Langley Research Center (LaRC) simulation facility has changed the means of design, development, and maintenance of instrument displays. The human-machine interface has evolved from a physical hardware device to a software-generated electronic display system. This has subsequently caused an increased workload at the facility. As computer processing power increases and the glass cockpit becomes predominant in facilities, software tools used in the design and development of cockpit displays are becoming both feasible and necessary for a more productive simulation environment. This paper defines LaRC requirements of a display software development tool and compares two available applications against these requirements. As a part of the software engineering process, these tools reduce development time, provide a common platform for display development, and produce exceptional real-time results.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
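In the spirit of the verification component described above, the toy check below flags an interface abstraction as inadequate when two machine states that demand different user actions are presented identically. This is only an illustrative stand-in, not the tool's actual model-checking algorithm; the espresso-machine-like states are invented.

```python
# Toy adequacy check: the interface maps machine states to display states;
# it is inadequate if one display state covers machine states that require
# different user actions.
machine_required_action = {
    "boiler_hot":  "wait",
    "boiler_cold": "heat",
    "ready":       "brew",
}
display_of = {            # interface abstraction: machine state -> display state
    "boiler_hot":  "busy",
    "boiler_cold": "busy",   # conflict: same display, different required action
    "ready":       "go",
}

def adequate(display_of, required):
    groups = {}
    for state, disp in display_of.items():
        groups.setdefault(disp, set()).add(required[state])
    conflicts = {d: acts for d, acts in groups.items() if len(acts) > 1}
    return (len(conflicts) == 0), conflicts

print(adequate(display_of, machine_required_action))
# (False, {'busy': {'wait', 'heat'}})
```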
Interactome INSIDER: a structural interactome browser for genomic studies.
Meyer, Michael J; Beltrán, Juan Felipe; Liang, Siqi; Fragoza, Robert; Rumack, Aaron; Liang, Jin; Wei, Xiaomu; Yu, Haiyuan
2018-01-01
We present Interactome INSIDER, a tool to link genomic variant information with structural protein-protein interactomes. Underlying this tool is the application of machine learning to predict protein interaction interfaces for 185,957 protein interactions with previously unresolved interfaces in human and seven model organisms, including the entire experimentally determined human binary interactome. Predicted interfaces exhibit functional properties similar to those of known interfaces, including enrichment for disease mutations and recurrent cancer mutations. Through 2,164 de novo mutagenesis experiments, we show that mutations of predicted and known interface residues disrupt interactions at a similar rate and much more frequently than mutations outside of predicted interfaces. To spur functional genomic studies, Interactome INSIDER (http://interactomeinsider.yulab.org) enables users to identify whether variants or disease mutations are enriched in known and predicted interaction interfaces at various resolutions. Users may explore known population variants, disease mutations, and somatic cancer mutations, or they may upload their own set of mutations for this purpose.
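A minimal sketch of the kind of enrichment question the browser supports, namely whether disease mutations are over-represented among predicted interface residues, is a contingency-table test. The counts below are invented for illustration, and this is not the authors' statistical pipeline.

```python
# Enrichment sketch: are mutations over-represented among interface residues?
from scipy.stats import fisher_exact

interface_residues, non_interface_residues = 120, 880
mutations_in_interface, mutations_outside = 18, 42

table = [
    [mutations_in_interface, interface_residues - mutations_in_interface],
    [mutations_outside, non_interface_residues - mutations_outside],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio {odds_ratio:.2f}, one-sided p = {p_value:.3g}")
```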
Design of a Single-Cell Positioning Controller Using Electroosmotic Flow and Image Processing
Ay, Chyung; Young, Chao-Wang; Chen, Jhong-Yin
2013-01-01
The objective of the current research was not only to provide a fast and automatic positioning platform for single cells, but also to improve biomolecular manipulation techniques. In this study, an automatic platform for cell positioning using electroosmotic flow and image processing technology was designed. The platform was developed using a PCI image acquisition interface card for capturing images from a microscope and then transferring them to a computer using human-machine interface software. This software was designed with the Laboratory Virtual Instrument Engineering Workbench (LabVIEW), a graphical language, for finding cell positions and viewing the driving trace, and the fuzzy logic method was used for controlling the voltage or duration of the electric field. After experiments on real human leukemic cells (U-937), the success rate of cell positioning achieved by controlling the voltage factor reaches 100% within 5 s. Greater precision is obtained when controlling the time factor, whereby the success rate reaches 100% within 28 s. Advantages in both high speed and high precision are attained if these two voltage and time control methods are combined. The control speed with the combined method is about 5.18 times greater than that achieved by the time method, and the control precision with the combined method is more than five times greater than that achieved by the voltage method. PMID:23698272
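The fuzzy control step mentioned above can be sketched very roughly as follows: the pixel error between the cell and its target is fuzzified into small/medium/large memberships and defuzzified into a drive voltage by a weighted average. The membership breakpoints and output voltages are assumptions for illustration, not the controller reported in the paper.

```python
# Minimal fuzzy-style controller sketch: pixel error -> drive voltage.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def voltage_for_error(err_px):
    # Rule weights paired with output voltages; breakpoints are assumptions.
    rules = [
        (tri(err_px, -1, 0, 40),    1.0),   # small error  -> low voltage
        (tri(err_px, 20, 60, 100),  5.0),   # medium error -> medium voltage
        (tri(err_px, 80, 150, 151), 10.0),  # large error  -> high voltage
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0        # errors beyond the rules give 0 V

print(voltage_for_error(10), voltage_for_error(70), voltage_for_error(140))
```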
Personal mobility and manipulation using robotics, artificial intelligence and advanced control.
Cooper, Rory A; Ding, Dan; Grindle, Garrett G; Wang, Hongwu
2007-01-01
Recent advancements of technologies, including computation, robotics, machine learning, communication, and miniaturization technologies, bring us closer to futuristic visions of compassionate intelligent devices. The missing element is a basic understanding of how to relate human functions (physiological, physical, and cognitive) to the design of intelligent devices and systems that aid and interact with people. Our stakeholder and clinician consultants identified a number of mobility barriers that have resisted traditional approaches. The most important physical obstacles are stairs, steps, curbs, doorways (doors), rough/uneven surfaces, weather hazards (snow, ice), crowded/cluttered spaces, and confined spaces. Focus group participants suggested a number of ways to make interaction simpler, including natural language interfaces such as the ability to say "I want a drink", a library of high-level commands (open a door, park the wheelchair, ...), and a touchscreen interface with images so the user could point and use other gestures.
2003-04-01
Development vs. Iterative Design; Getting to Know the User: Designing for Usability, Utility, and Pleasure; Terrain Focus; Display vs. Control ... heterogeneous, and it diverged into broad philosophical issues, such as "design as engineering" vs. "design as art" and the utility of controlled ...
Giraudet, L; Imbert, J-P; Bérenger, M; Tremblay, S; Causse, M
2015-11-01
The Air Traffic Control (ATC) environment is complex and safety-critical. Whilst exchanging information with pilots, controllers must also be alert to visual notifications displayed on the radar screen (e.g., a warning indicating a loss of minimum separation between aircraft). Under the assumption that attentional resources are shared between vision and hearing, the visual interface design may also impact the ability to process these auditory stimuli. Using a simulated ATC task, we compared the behavioral and neural responses to two different visual notification designs: the operational alarm, in which blinking colored "ALRT" text is displayed around the label of the notified plane ("Color-Blink"), and a more salient alarm involving the same blinking text plus four moving yellow chevrons ("Box-Animation"). Participants performed a concurrent auditory task with the requirement to react to rare pitch tones. The P300 elicited by the tones was taken as an indicator of remaining attentional resources. Participants who were presented with the more salient visual design showed better accuracy than the group with the suboptimal operational design. On a physiological level, auditory P300 amplitude in the former group was greater than that observed in the latter group. One potential explanation is that the enhanced visual design freed up attentional resources which, in turn, improved the cerebral processing of the auditory stimuli. These results suggest that P300 amplitude can be used as a valid estimation of the efficiency of interface designs, and of cognitive load more generally. Copyright © 2015 Elsevier B.V. All rights reserved.
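To make the P300 measure concrete, the sketch below averages synthetic tone-locked EEG epochs and takes the peak amplitude in a nominal 250-500 ms window. The sampling rate, window, and synthetic data are assumptions for illustration, not the study's recording or analysis parameters.

```python
# Average epochs time-locked to the rare tones and measure the P300 peak.
import numpy as np

fs = 250                                   # Hz, assumed sampling rate
t = np.arange(-0.2, 0.8, 1 / fs)           # epoch from -200 ms to 800 ms
rng = np.random.default_rng(0)

# Synthetic epochs: noise plus a positive deflection around 350 ms.
epochs = rng.normal(0, 2, (40, t.size))
epochs += 5 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))

erp = epochs.mean(axis=0)                  # average over trials
win = (t >= 0.25) & (t <= 0.50)            # nominal P300 window
p300_amplitude = erp[win].max()
p300_latency = t[win][erp[win].argmax()]
print(f"P300 amplitude {p300_amplitude:.2f} uV at {p300_latency * 1000:.0f} ms")
```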
Man-machine interfaces in LACIE/ERIPS
NASA Technical Reports Server (NTRS)
Duprey, B. B. (Principal Investigator)
1979-01-01
One of the most important aspects of the interactive portion of the LACIE/ERIPS software system is the way in which the analysis and decision-making capabilities of a human being are integrated with the speed and accuracy of a computer to produce a powerful analysis system. The three major man-machine interfaces in the system are (1) the use of menus for communication between the software and the interactive user; (2) the checkpoint/restart facility to recreate in one job the internal environment achieved in an earlier one; and (3) the error recovery capability for handling errors that would normally cause job termination. This interactive system, which executes on an IBM 360/75 mainframe, was adapted for use in noninteractive (batch) mode. A case study is presented to show how the interfaces work in practice by defining some fields based on an image screen display, noting the field definitions, and obtaining a film product of the classification map.
Development of sensitized pick coal interface detector system
NASA Technical Reports Server (NTRS)
Burchill, R. F.
1982-01-01
One approach for detection of the coal interface is measurement of pick cutting loads and shock through the use of pick strain gage load cells and accelerometers. The cutting drum of a longwall mining machine contains a number of cutting picks. In order to measure pick loads and shocks, one pick was instrumented and telemetry was used to transmit the signals from the drum to an instrument-type tape recorder. A data system using FM telemetry was designed to transfer cutting bit load and shock information from the drum of a longwall shearer coal mining machine to a chassis-mounted data recorder. The design of components in the test data system was finalized, the required instruments were assembled, the instrument system was evaluated in an above-ground simulation test, and an underground test series to obtain tape-recorded sensor data was conducted.
The Body-Machine Interface: A new perspective on an old theme
Casadio, Maura; Ranganathan, Rajiv; Mussa-Ivaldi, Ferdinando A.
2012-01-01
Body-machine interfaces establish a way to interact with a variety of devices, allowing their users to extend the limits of their performance. Recent advances in this field, ranging from computer-interfaces to bionic limbs, have had important consequences for people with movement disorders. In this article, we provide an overview of the basic concepts underlying the body-machine interface with special emphasis on their use for rehabilitation and for operating assistive devices. We outline the steps involved in building such an interface and we highlight the critical role of body-machine interfaces in addressing theoretical issues in motor control as well as their utility in movement rehabilitation. PMID:23237465
Systems Engineering and Integration for Advanced Life Support System and HST
NASA Technical Reports Server (NTRS)
Kamarani, Ali K.
2005-01-01
The systems engineering (SE) discipline has revolutionized the way engineers and managers think about solving issues related to the design of complex systems. With continued development of state-of-the-art technologies, systems are becoming more complex, and therefore a systematic approach is essential to control and manage their integrated design and development. This complexity is driven by integration issues. In this case, subsystems must interact with one another in order to achieve integration objectives and also achieve the overall system's required performance. The systems engineering process addresses these issues at multiple levels. It is a technology and management process dedicated to controlling all aspects of the system life cycle to assure integration at all levels. The Advanced Integration Matrix (AIM) project serves as the systems engineering and integration function for the Human Support Technology (HST) program. AIM provides integrated test facilities and personnel for performance trade studies, analyses, integrated models, test results, and validated requirements for the integration of HST. The goal of AIM is to address systems-level integration issues for exploration missions. It will use an incremental systems integration approach to yield technologies, baselines for further development, and possible breakthrough concepts in the areas of technological and organizational interfaces, total information flow, system-wide controls, technical synergism, mission operations protocols and procedures, and human-machine interfaces.
The PennBMBI: Design of a General Purpose Wireless Brain-Machine-Brain Interface System.
Liu, Xilin; Zhang, Milin; Subei, Basheer; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan
2015-04-01
In this paper, a general purpose wireless Brain-Machine-Brain Interface (BMBI) system is presented. The system integrates four battery-powered wireless devices for the implementation of a closed-loop sensorimotor neural interface, including a neural signal analyzer, a neural stimulator, a body-area sensor node and a graphic user interface implemented on the PC end. The neural signal analyzer features a four channel analog front-end with configurable bandpass filter, gain stage, digitization resolution, and sampling rate. The target frequency band is configurable from EEG to single unit activity. A noise floor of 4.69 μVrms is achieved over a bandwidth from 0.05 Hz to 6 kHz. Digital filtering, neural feature extraction, spike detection, sensing-stimulating modulation, and compressed sensing measurement are realized in a central processing unit integrated in the analyzer. A flash memory card is also integrated in the analyzer. A 2-channel neural stimulator with a compliance voltage up to ± 12 V is included. The stimulator is capable of delivering unipolar or bipolar, charge-balanced current pulses with programmable pulse shape, amplitude, width, pulse train frequency and latency. A multi-functional sensor node, including an accelerometer, a temperature sensor, a flexiforce sensor and a general sensor extension port has been designed. A computer interface is designed to monitor, control and configure all aforementioned devices via a wireless link, according to a custom designed communication protocol. Wireless closed-loop operation between the sensory devices, neural stimulator, and neural signal analyzer can be configured. The proposed system was designed to link two sites in the brain, bridging the brain and external hardware, as well as creating new sensory and motor pathways for clinical practice. Bench test and in vivo experiments are performed to verify the functions and performances of the system.
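One of the processing steps listed above, spike detection, can be illustrated with a minimal offline sketch: band-pass filter the raw trace and flag threshold crossings using a robust noise estimate. The filter band, threshold rule, and synthetic signal are assumptions for illustration and do not represent the device's firmware.

```python
# Offline spike-detection sketch on a synthetic trace.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 24000                                           # Hz, assumed sampling rate
rng = np.random.default_rng(1)
raw = rng.normal(0, 5e-6, fs)                        # 1 s of noise, in volts
raw[[3000, 9000, 15000]] -= 100e-6                   # three injected "spikes"

# Band-pass 300 Hz - 6 kHz (zero-phase), a common spike band.
b, a = butter(2, [300 / (fs / 2), 6000 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, raw)

# Robust noise estimate (median-based) sets a negative threshold.
threshold = -4 * np.median(np.abs(filtered)) / 0.6745
crossings = np.flatnonzero((filtered[1:] < threshold) & (filtered[:-1] >= threshold))
# Each injected spike may produce one or more crossings due to filter ringing.
print("detected spike sample indices:", crossings)
```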
Seating Considerations for Spaceflight: The Human to Machine Interface
NASA Astrophysics Data System (ADS)
Gohmert, D. M.
2012-01-01
Seating is one of the most critical components to be considered during design of a spacecraft. Since seats are the final interface between the occupant and the vehicle wherein all launch and landing operations are performed, significant effort must be spent to ensure proper integration of the human to the spacecraft. The importance of seating can be divided into two categories: seat layout and seat design. The layout of the seats drives the overall cabin configuration - from displays and controls, to windows, to stowage, to egress paths. Since the layout of the seats is such a critical design parameter within the crew compartment, it is one of the first design challenges that must be completed in the critical path of the spacecraft design. In consideration of seat layout in the vehicle, it is important for the designers to account for often intangible factors such as safety, operability, contingency performance, and crew rescue. Seat layout will lead to definition of the quantity, shape, and posture of the seats. The seats of the craft must restrain and protect the occupant in all seated phases of flight, while allowing for nominal mission performance. In design of a spacecraft seat, the general posture of the occupant and the landing loads to be encountered are the greatest drivers of overall design. Variances, such as upright versus recumbent postures will dictate fit of the seat to the occupant and drive the total envelope of the seat around the occupant. Seat design revolves around applying sound principles of seated occupant protection coupled with the unique environments driven by the seat layout, landing loads, and operational and emergency scenarios.
A wearable exoskeleton suit for motion assistance to paralysed patients.
Chen, Bing; Zhong, Chun-Hao; Zhao, Xuan; Ma, Hao; Guan, Xiao; Li, Xi; Liang, Feng-Yan; Cheng, Jack Chun Yiu; Qin, Ling; Law, Sheung-Wai; Liao, Wei-Hsin
2017-10-01
The number of patients paralysed due to stroke, spinal cord injury, or other related diseases is increasing. In order to improve the physical and mental health of these patients, robotic devices that can help them to regain the mobility to stand and walk are highly desirable. The aim of this study is to develop a wearable exoskeleton suit to help paralysed patients regain the ability to stand up/sit down (STS) and walk. A lower extremity exoskeleton named CUHK-EXO was developed with considerations of ergonomics, user-friendly interface, safety, and comfort. The mechanical structure, human-machine interface, reference trajectories of the exoskeleton hip and knee joints, and control architecture of CUHK-EXO were designed. Clinical trials with a paralysed patient were performed to validate the effectiveness of the whole system design. With the assistance provided by CUHK-EXO, the paralysed patient was able to STS and walk. As designed, the actual joint angles of the exoskeleton well followed the designed reference trajectories, and assistive torques generated from the exoskeleton actuators were able to support the patient's STS and walking motions. The whole system design of CUHK-EXO is effective and can be optimised for clinical application. The exoskeleton can provide proper assistance in enabling paralysed patients to STS and walk.
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Farooq, Mohammad U.
1986-01-01
The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
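As a loose sketch of the event-based view described above, with an interaction event carrying common generic elements and flowing from the interactive layer through a dialogue manager to the application interface, the snippet below defines a hypothetical event structure and a trivial two-layer dispatch. All names and behavior are illustrative assumptions, not the proposed methodology itself.

```python
# Hypothetical interaction-event structure and layered dispatch.
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    prompt: str        # what the system presented
    user_input: str    # what the user entered
    command: str       # parsed command name
    feedback: str      # response shown to the user

def application_interface(command: str) -> str:
    # Application layer: resolve the parsed command to a result.
    return {"search": "12 records found", "help": "commands: search, help"}.get(
        command, "unknown command")

def dialogue_manager(raw_input: str) -> InteractionEvent:
    command = raw_input.strip().split()[0].lower()     # trivial parse
    result = application_interface(command)
    return InteractionEvent("query>", raw_input, command, result)

print(dialogue_manager("SEARCH neural interfaces").feedback)
```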
NASA Astrophysics Data System (ADS)
Roy, Jean; Breton, Richard; Paradis, Stephane
2001-08-01
Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation Analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of SA concepts and the design of data fusion techniques must take human factors into account in order to ensure a cognitive fit of the fusion system with the decision maker. Indeed, the tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human-machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.
A Concept for Optimizing Behavioural Effectiveness & Efficiency
NASA Astrophysics Data System (ADS)
Barca, Jan Carlo; Rumantir, Grace; Li, Raymond
Both humans and machines exhibit strengths and weaknesses that can be balanced by merging the two entities. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments in a more effective way than any system known today. To demonstrate this concept, we have developed a simulation where a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through the use of three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third control system is an autonomous navigation system based on machine intelligence embedded in the spider. This system enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Fink, D. Hill, J. O'Hara
2004-11-30
Nuclear plant operators face a significant challenge designing and modifying control rooms. This report provides guidance on planning, designing, implementing and operating modernized control rooms and digital human-system interfaces.
An Architectural Experience for Interface Design
ERIC Educational Resources Information Center
Gong, Susan P.
2016-01-01
The problem of human-computer interface design was brought to the foreground with the emergence of the personal computer, the increasing complexity of electronic systems, and the need to accommodate the human operator in these systems. With each new technological generation discovering the interface design problems of its own technologies, initial…
Hitts' Law? A test of the relationship between information load and movement precision
NASA Technical Reports Server (NTRS)
Zaleski, M.; Moray, N.
1986-01-01
Recent technological developments have made viable a man-machine interface heavily dependent on graphics and pointing devices. This has led to new interest in classical reaction and movement time work by Human Factors specialists. Two experiments were designed and run to test the dependence of target capture time on information load (Hick's Law) and movement precision (Fitts' Law). The proposed model linearly combines Hick's and Fitts' results into a combination law which then might be called Hitts' Law. Subjects were required to react to stimuli by manipulating a joystick so as to cause a cursor to capture a target on a CRT screen. Response entropy and the relative precision of the capture movement were crossed in a factorial design, and the data obtained were found to support the model.
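Under the assumption that the combined model adds a Hick information-load term and a Fitts index-of-difficulty term to a single linear predictor of capture time, a sketch looks like the following; the coefficients and the exact form of each term are placeholders, not the fitted model from the experiments.

```python
# Sketch of a linear Hick + Fitts combination for target capture time.
import math

def predicted_capture_time(n_alternatives, amplitude, width,
                           a=0.2, b=0.15, c=0.1):
    hick_term = math.log2(n_alternatives + 1)       # response entropy (bits)
    fitts_id = math.log2(2 * amplitude / width)     # index of difficulty (bits)
    return a + b * hick_term + c * fitts_id         # seconds, with toy coefficients

print(f"{predicted_capture_time(4, amplitude=100, width=10):.3f} s")
```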
Design of virtual SCADA simulation system for pressurized water reactor
NASA Astrophysics Data System (ADS)
Wijaksono, Umar; Abdullah, Ade Gafar; Hakim, Dadang Lukman
2016-02-01
The Virtual SCADA system is a software-based human-machine interface that can visualize the processes of a plant. This paper describes the design of a virtual SCADA system that aims to illustrate the operating principles of a pressurized water reactor (PWR) nuclear power plant. The simulation uses technical data of the Olkiluoto 3 nuclear power plant unit in Finland. The system was developed using Wonderware InTouch and is equipped with manuals for each component, animation links, alarm systems, real-time and historical trending, and a security system. The results showed that, in general, this system can clearly demonstrate the principles of energy flow and energy conversion processes in pressurized water reactors. This virtual SCADA simulation system can be used as instructional media to introduce the operating principles of pressurized water reactors.
Diamond turning machine controller implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrard, K.P.; Taylor, L.W.; Knight, B.F.
The standard controller for a Pneumo ASG 2500 Diamond Turning Machine, an Allen Bradley 8200, has been replaced with a custom high-performance design. This controller consists of four major components. Axis position feedback information is provided by a Zygo Axiom 2/20 laser interferometer with 0.1 micro-inch resolution. Hardware interface logic couples the computer's digital and analog I/O channels to the diamond turning machine's analog motor controllers, the laser interferometer, and other machine status and control information. It also provides front panel switches for operator override of the computer controller and implements the emergency stop sequence. The remaining two components, the control computer hardware and software, are discussed in detail below.
NASA Astrophysics Data System (ADS)
Zhenghui, Zhao
2018-04-01
Against the background of China's increasingly serious population aging, this paper examines the psychological characteristics of elderly users of public self-service facilities and the development status and future trends of public self-service ticketing services. The approach is to analyse the physiological and psychological characteristics and education level of the elderly, to study their consumer psychology and regional cultural characteristics in depth, and then to conduct comprehensive analysis and research in combination with the interface features of public self-service ticketing machines. The interface design will become more personalized, intelligent, regional, and international. Design strategies that care for the elderly in regional public self-service facility interfaces develop the concept of caring for the elderly across the entire region as an indispensable, people-benefiting optimization of modern social services.
Use of Computer Speech Technologies To Enhance Learning.
ERIC Educational Resources Information Center
Ferrell, Joe
1999-01-01
Discusses the design of an innovative learning system that uses new technologies for the man-machine interface, incorporating a combination of Automatic Speech Recognition (ASR) and Text To Speech (TTS) synthesis. Highlights include using speech technologies to mimic the attributes of the ideal tutor and design features. (AEF)
NASA Astrophysics Data System (ADS)
Stieglitz, Thomas
2009-05-01
Implantable medical devices to interface with muscles, peripheral nerves, and the brain have been developed for many applications over the last decades. They have been applied in fundamental neuroscientific studies as well as in diagnosis, therapy and rehabilitation in clinical practice. Success stories of these implants have been written with the help of precision mechanics manufacturing techniques. Latest cutting-edge research approaches to restore vision in blind persons and to develop an interface with the human brain as a motor control interface, however, need more complex systems, larger scales of integration, and higher degrees of miniaturization. Microsystems engineering offers adequate tools, methods, and materials, but so far no MEMS-based active medical device has been transferred into clinical practice. Silicone rubber, polyimide, and parylene as flexible materials, silicon and alumina (aluminum oxide ceramic) as substrate and insulation or packaging materials, respectively, and precious metals as electrodes have to be combined into systems that do not harm the biological target structure and have to work reliably in a wet environment with ions and proteins. Here, different design, manufacturing, and packaging paradigms will be presented and strengths and drawbacks will be discussed in close relation to the envisioned biological and medical applications.
Neurosurgery and the dawning age of Brain-Machine Interfaces
Rowland, Nathan C.; Breshears, Jonathan; Chang, Edward F.
2013-01-01
Brain–machine interfaces (BMIs) are on the horizon for clinical neurosurgery. Electrocorticography-based platforms are less invasive than implanted microelectrodes; however, the latter are unmatched in their ability to achieve fine motor control of a robotic prosthesis capable of natural human behaviors. These technologies will be crucial to restoring neural function to a large population of patients with severe neurologic impairment, including those with spinal cord injury, stroke, limb amputation, and disabling neuromuscular disorders such as amyotrophic lateral sclerosis. On the opposite end of the spectrum are neural enhancement technologies for specialized applications such as combat. An ongoing ethical dialogue is imminent as we prepare for BMI platforms to enter the neurosurgical realm of clinical management. PMID:23653884
Conductive fiber-based ultrasensitive textile pressure sensor for wearable electronics.
Lee, Jaehong; Kwon, Hyukho; Seo, Jungmok; Shin, Sera; Koo, Ja Hoon; Pang, Changhyun; Son, Seungbae; Kim, Jae Hyung; Jang, Yong Hoon; Kim, Dae Eun; Lee, Taeyoon
2015-04-17
A flexible and sensitive textile-based pressure sensor is developed using highly conductive fibers coated with dielectric rubber materials. The pressure sensor exhibits superior sensitivity, very fast response time, and high stability, compared with previous textile-based pressure sensors. By using a weaving method, the pressure sensor can be applied to make smart gloves and clothes that can control machines wirelessly as human-machine interfaces. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-02-06
In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.
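The abstract does not give implementation details for the motion synthesis step; as a hedged illustration of Gaussian process regression used in that role, the sketch below regresses a joint-angle trajectory on two hypothetical design variables with scikit-learn. All variable names, units, and data are invented placeholders, not the authors' model.

```python
# Hedged sketch (not the authors' code): Gaussian-process regression of a
# squat joint-angle trajectory on hypothetical design variables.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Invented independent variables: bar height [m] and stance width [m].
X = rng.uniform([0.9, 0.3], [1.3, 0.6], size=(30, 2))
# Invented targets: a knee-angle trajectory (50 samples per squat) [deg].
t = np.linspace(0.0, 1.0, 50)
Y = np.array([90.0 * np.sin(np.pi * t) * (x[0] / 1.1) for x in X])

kernel = RBF(length_scale=[0.2, 0.2]) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

# Synthesize a squat trajectory for an unseen combination of design variables.
new_trajectory = gpr.predict(np.array([[1.05, 0.45]]))   # shape (1, 50)
print(new_trajectory.shape)
```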
Donati, Ana R C; Shokur, Solaiman; Morya, Edgard; Campos, Debora S F; Moioli, Renan C; Gitti, Claudia M; Augusto, Patricia B; Tripodi, Sandra; Pires, Cristhiane G; Pereira, Gislaine A; Brasil, Fabricio L; Gallo, Simone; Lin, Anthony A; Takigami, Angelo K; Aratanha, Maria A; Joshi, Sanjay; Bleuler, Hannes; Cheng, Gordon; Rudolph, Alan; Nicolelis, Miguel A L
2016-08-11
Brain-machine interfaces (BMIs) provide a new assistive strategy aimed at restoring mobility in severely paralyzed patients. Yet, no study in animals or in human subjects has indicated that long-term BMI training could induce any type of clinical recovery. Eight chronic (3-13 years) spinal cord injury (SCI) paraplegics were subjected to long-term training (12 months) with a multi-stage BMI-based gait neurorehabilitation paradigm aimed at restoring locomotion. This paradigm combined intense immersive virtual reality training, enriched visual-tactile feedback, and walking with two EEG-controlled robotic actuators, including a custom-designed lower limb exoskeleton capable of delivering tactile feedback to subjects. Following 12 months of training with this paradigm, all eight patients experienced neurological improvements in somatic sensation (pain localization, fine/crude touch, and proprioceptive sensing) in multiple dermatomes. Patients also regained voluntary motor control in key muscles below the SCI level, as measured by EMGs, resulting in marked improvement in their walking index. As a result, 50% of these patients were upgraded to an incomplete paraplegia classification. Neurological recovery was paralleled by the reemergence of lower limb motor imagery at cortical level. We hypothesize that this unprecedented neurological recovery results from both cortical and spinal cord plasticity triggered by long-term BMI usage.
Donati, Ana R. C.; Shokur, Solaiman; Morya, Edgard; Campos, Debora S. F.; Moioli, Renan C.; Gitti, Claudia M.; Augusto, Patricia B.; Tripodi, Sandra; Pires, Cristhiane G.; Pereira, Gislaine A.; Brasil, Fabricio L.; Gallo, Simone; Lin, Anthony A.; Takigami, Angelo K.; Aratanha, Maria A.; Joshi, Sanjay; Bleuler, Hannes; Cheng, Gordon; Rudolph, Alan; Nicolelis, Miguel A. L.
2016-01-01
Brain-machine interfaces (BMIs) provide a new assistive strategy aimed at restoring mobility in severely paralyzed patients. Yet, no study in animals or in human subjects has indicated that long-term BMI training could induce any type of clinical recovery. Eight chronic (3–13 years) spinal cord injury (SCI) paraplegics were subjected to long-term training (12 months) with a multi-stage BMI-based gait neurorehabilitation paradigm aimed at restoring locomotion. This paradigm combined intense immersive virtual reality training, enriched visual-tactile feedback, and walking with two EEG-controlled robotic actuators, including a custom-designed lower limb exoskeleton capable of delivering tactile feedback to subjects. Following 12 months of training with this paradigm, all eight patients experienced neurological improvements in somatic sensation (pain localization, fine/crude touch, and proprioceptive sensing) in multiple dermatomes. Patients also regained voluntary motor control in key muscles below the SCI level, as measured by EMGs, resulting in marked improvement in their walking index. As a result, 50% of these patients were upgraded to an incomplete paraplegia classification. Neurological recovery was paralleled by the reemergence of lower limb motor imagery at cortical level. We hypothesize that this unprecedented neurological recovery results from both cortical and spinal cord plasticity triggered by long-term BMI usage. PMID:27513629
Brain-machine interfaces: electrophysiological challenges and limitations.
Lega, Bradley C; Serruya, Mijail D; Zaghloul, Kareem A
2011-01-01
Brain-machine interfaces (BMI) seek to directly communicate with the human nervous system in order to diagnose and treat intrinsic neurological disorders. While the first generation of these devices has realized significant clinical successes, they often rely on gross electrical stimulation using empirically derived parameters through open-loop mechanisms of action that are not yet fully understood. Their limitations reflect the inherent challenge in developing the next generation of these devices. This review identifies lessons learned from the first generation of BMI devices (chiefly deep brain stimulation), identifying key problems for which the solutions will aid the development of the next generation of technologies. Our analysis examines four hypotheses for the mechanism by which brain stimulation alters surrounding neurophysiologic activity. We then focus on motor prosthetics, describing various approaches to overcoming the problems of decoding neural signals. We next turn to visual prosthetics, an area for which the challenges of signal coding to match neural architecture has been partially overcome. Finally, we close with a review of cortical stimulation, examining basic principles that will be incorporated into the design of future devices. Throughout the review, we relate the issues of each specific topic to the common thread of BMI research: translating new knowledge of network neuroscience into improved devices for neuromodulation.
Gesture-controlled interfaces for self-service machines and other applications
NASA Technical Reports Server (NTRS)
Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)
2004-01-01
A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
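As a hedged illustration of the linear-in-parameters idea (this is not the patented implementation), the sketch below fits the parameters of a simple oscillatory motion model by linear least squares and classifies an observed trace by the stored parameter set, the "predictor bin", with the smallest prediction residual.

```python
# Illustrative only: model a 1-D oscillatory gesture as x'' = theta1*x + theta2*x',
# fit theta by linear least squares, and recognize a new trace by residual.
import numpy as np

def fit_gesture(x, dt):
    """Estimate oscillator parameters from a 1-D position trace."""
    v = np.gradient(x, dt)          # velocity by finite differences
    a = np.gradient(v, dt)          # acceleration
    A = np.column_stack([x, v])     # regressors (linear in the parameters)
    theta, *_ = np.linalg.lstsq(A, a, rcond=None)
    return theta                    # [theta1, theta2]

def residual(x, dt, theta):
    v = np.gradient(x, dt)
    a = np.gradient(v, dt)
    return np.mean((a - np.column_stack([x, v]) @ theta) ** 2)

dt = 0.02
t = np.arange(0, 2, dt)
circle = np.sin(2 * np.pi * 1.0 * t)       # 1 Hz oscillatory gesture
wave = np.sin(2 * np.pi * 2.5 * t)         # 2.5 Hz oscillatory gesture
bins = {"circle": fit_gesture(circle, dt), "wave": fit_gesture(wave, dt)}

observed = np.sin(2 * np.pi * 2.4 * t)     # unknown motion to classify
best = min(bins, key=lambda name: residual(observed, dt, bins[name]))
print("recognized gesture:", best)          # -> "wave"
```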
Design and development of an IoT-based web application for an intelligent remote SCADA system
NASA Astrophysics Data System (ADS)
Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long
2018-03-01
This paper presents a design of an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) for establishing a remote electrical power monitoring and control system by using responsive web design (RWD), and a Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data is sent to the client by using the TCP/IP protocol, which supports mobile devices with different screen sizes. The users can provide instructions immediately without being present to check the conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function by using a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more intuitive. Moreover, it contains an analog input/output and a basic digital input/output that can be applied to a motor driver and an inverter for integration with a remote SCADA system based on IoT, and thus achieve efficient power management.
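The back end described here is built on ASP.NET MVC, IIS, and SQL Server; as a language-neutral, hedged illustration of the sensing-data push over TCP/IP, the sketch below posts one reading as JSON to a hypothetical HTTP endpoint. The endpoint URL and field names are invented for the sketch.

```python
# Illustrative only: a sensor node pushing one power reading as JSON over
# HTTP/TCP to a hypothetical SCADA web endpoint (not the paper's ASP.NET stack).
import json
import urllib.request

reading = {"node_id": "inverter-01", "voltage_V": 228.4,
           "current_A": 12.7, "timestamp": "2018-03-01T08:15:00Z"}

req = urllib.request.Request(
    url="http://scada.example.local/api/readings",   # hypothetical endpoint
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print("server replied:", resp.status)
```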
Intravascular Neural Interface with Nanowire Electrode
Watanabe, Hirobumi; Takahashi, Hirokazu; Nakao, Masayuki; Walton, Kerry; Llinás, Rodolfo R.
2010-01-01
Summary A minimally invasive electrical recording and stimulating technique capable of simultaneously monitoring the activity of a significant number (e.g., 10^3 to 10^4) of neurons is an absolute prerequisite in developing an effective brain–machine interface. Although there are many excellent methodologies for recording single or multiple neurons, there has been no methodology for accessing large numbers of cells in a behaving experimental animal or human individual. Brain vascular parenchyma is a promising candidate for addressing this problem. It has been proposed [1, 2] that a multitude of nanowire electrodes introduced into the central nervous system through the vascular system to address any brain area may be a possible solution. In this study we implement a design for such microcatheter for ex vivo experiments. Using Wollaston platinum wire, we design a submicron-scale electrode and develop a fabrication method. We then evaluate the mechanical properties of the electrode in a flow when passing through the intricacies of the capillary bed in ex vivo Xenopus laevis experiments. Furthermore, we demonstrate the feasibility of intravascular recording in the spinal cord of Xenopus laevis. PMID:21572940
Implementation of medical monitor system based on networks
NASA Astrophysics Data System (ADS)
Yu, Hui; Cao, Yuzhen; Zhang, Lixin; Ding, Mingshi
2006-11-01
In this paper, the development trend of medical monitor systems is analyzed; portability and network functions are becoming more and more popular among all kinds of medical monitoring devices. The architecture of a networked medical monitor system solution is provided, and the design and implementation details of the medical monitor terminal, the monitor center software, the distributed medical database, and two kinds of medical information terminal are discussed in detail. A Rabbit3000 system is used in the medical monitor terminal to implement secure data transfer over the network, the human-machine interface, power management, and the DSP interface, while the DSP chip TMS5402 is used for signal analysis and data compression. The distributed medical database is designed for the hospital center according to the DICOM information model and the HL7 standard. A pocket medical information terminal based on an ARM9 embedded platform is also developed to interact with the center database over the network. Two kernels based on WINCE are customized and corresponding terminal software is developed for nurses' routine care and doctors' auxiliary diagnosis. An invention patent for the monitor terminal has been approved, and manufacturing and clinical test plans are scheduled. Applications for invention patents have also been filed for the two medical information terminals.
Running With an Elastic Lower Limb Exoskeleton.
Cherry, Michael S; Kota, Sridhar; Young, Aaron; Ferris, Daniel P
2016-06-01
Although there have been many lower limb robotic exoskeletons that have been tested for human walking, few devices have been tested for assisting running. It is possible that a pseudo-passive elastic exoskeleton could benefit human running without the addition of electrical motors due to the spring-like behavior of the human leg. We developed an elastic lower limb exoskeleton that added stiffness in parallel with the entire lower limb. Six healthy, young subjects ran on a treadmill at 2.3 m/s with and without the exoskeleton. Although the exoskeleton was designed to provide ~50% of normal leg stiffness during running, it only provided 24% of leg stiffness during testing. The difference in added leg stiffness was primarily due to soft tissue compression and harness compliance decreasing exoskeleton displacement during stance. As a result, the exoskeleton only supported about 7% of the peak vertical ground reaction force. There was a significant increase in metabolic cost when running with the exoskeleton compared with running without the exoskeleton (ANOVA, P < .01). We conclude that 2 major roadblocks to designing successful lower limb robotic exoskeletons for human running are human-machine interface compliance and the extra lower limb inertia from the exoskeleton.
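One hedged way to express the compliance effect described above (this is an illustration, not the authors' model) is to treat the exoskeleton spring and the soft-tissue/harness interface as springs in series, with that series pair acting in parallel with the biological leg:

```latex
% Hedged illustration: exoskeleton spring k_exo in series with interface
% compliance k_int, the pair acting in parallel with leg stiffness k_leg.
\[
  k_\text{added} = \left(\frac{1}{k_\text{exo}} + \frac{1}{k_\text{int}}\right)^{-1},
  \qquad
  k_\text{total} = k_\text{leg} + k_\text{added}.
\]
% Example with assumed values: if k_exo = 0.5 k_leg (the design target) but the
% soft tissue and harness contribute only k_int \approx 0.46 k_leg in series, then
% k_added = (1/0.5 + 1/0.46)^{-1} k_leg \approx 0.24 k_leg, consistent with the
% roughly 24% added stiffness measured during testing.
```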
Assessing the Usability of Six Data Entry Mobile Interfaces for Caregivers: A Randomized Trial.
Ehrler, Frederic; Haller, Guy; Sarrey, Evelyne; Walesa, Magali; Wipfli, Rolf; Lovis, Christian
2015-12-15
There is an increased demand in hospitals for tools, such as dedicated mobile device apps, that enable the recording of clinical information in an electronic format at the patient's bedside. Although the human-machine interface design on mobile devices strongly influences the accuracy and effectiveness of data recording, there is still a lack of evidence as to which interface design offers the best guarantee for ease of use and quality of recording. Therefore, interfaces need to be assessed both for usability and reliability because recording errors can seriously impact the overall level of quality of the data and affect the care provided. In this randomized crossover trial, we formally compared 6 handheld device interfaces for both speed of data entry and accuracy of recorded information. Three types of numerical data commonly recorded at the patient's bedside were used to evaluate the interfaces. In total, 150 health care professionals from the University Hospitals of Geneva volunteered to record a series of randomly generated data on each of the 6 interfaces provided on a smartphone. The interfaces were presented in a randomized order as part of fully automated data entry scenarios. During the data entry process, accuracy and effectiveness were automatically recorded by the software. Various types of errors occurred, which ranged from 0.7% for the most reliable design to 18.5% for the least reliable one. The length of time needed for data recording ranged from 2.81 sec to 14.68 sec, depending on the interface. The numeric keyboard interface delivered the best performance for pulse data entry with a mean time of 3.08 sec (SD 0.06) and an accuracy of 99.3%. Our study highlights the critical impact the choice of an interface can have on the quality of recorded data. Selecting an interface should be driven less by the needs of specific end-user groups or the necessity to facilitate the developer's task (eg, by opting for default solutions provided by commercial platforms) than by the level of speed and accuracy an interface can provide for recording information. An important effort must be made to properly validate mobile device interfaces intended for use in the clinical setting. In this regard, our study identified the numeric keyboard, among the proposed designs, as the most accurate interface for entering specific numerical values. This is an important step toward providing clearer guidelines on which interface to choose for the appropriate use of handheld device interfaces in the health care setting.
Assessing the Usability of Six Data Entry Mobile Interfaces for Caregivers: A Randomized Trial
Haller, Guy; Sarrey, Evelyne; Walesa, Magali; Wipfli, Rolf; Lovis, Christian
2015-01-01
Background There is an increased demand in hospitals for tools, such as dedicated mobile device apps, that enable the recording of clinical information in an electronic format at the patient’s bedside. Although the human-machine interface design on mobile devices strongly influences the accuracy and effectiveness of data recording, there is still a lack of evidence as to which interface design offers the best guarantee for ease of use and quality of recording. Therefore, interfaces need to be assessed both for usability and reliability because recording errors can seriously impact the overall level of quality of the data and affect the care provided. Objective In this randomized crossover trial, we formally compared 6 handheld device interfaces for both speed of data entry and accuracy of recorded information. Three types of numerical data commonly recorded at the patient’s bedside were used to evaluate the interfaces. Methods In total, 150 health care professionals from the University Hospitals of Geneva volunteered to record a series of randomly generated data on each of the 6 interfaces provided on a smartphone. The interfaces were presented in a randomized order as part of fully automated data entry scenarios. During the data entry process, accuracy and effectiveness were automatically recorded by the software. Results Various types of errors occurred, which ranged from 0.7% for the most reliable design to 18.5% for the least reliable one. The length of time needed for data recording ranged from 2.81 sec to 14.68 sec, depending on the interface. The numeric keyboard interface delivered the best performance for pulse data entry with a mean time of 3.08 sec (SD 0.06) and an accuracy of 99.3%. Conclusions Our study highlights the critical impact the choice of an interface can have on the quality of recorded data. Selecting an interface should be driven less by the needs of specific end-user groups or the necessity to facilitate the developer’s task (eg, by opting for default solutions provided by commercial platforms) than by the level of speed and accuracy an interface can provide for recording information. An important effort must be made to properly validate mobile device interfaces intended for use in the clinical setting. In this regard, our study identified the numeric keyboard, among the proposed designs, as the most accurate interface for entering specific numerical values. This is an important step toward providing clearer guidelines on which interface to choose for the appropriate use of handheld device interfaces in the health care setting. PMID:27025648
ERIC Educational Resources Information Center
Sandler, Mark
1985-01-01
Discusses several concerns about nature of online public access catalogs (OPAC) that have particular import to reference librarians: user passivity and loss of control growing out of "human-machine interface" and the larger social context; and the tendency of computerized bibliographic systems to obfuscate human origins of library…
Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements
NASA Astrophysics Data System (ADS)
Sato, Naoyuki; Yamaguchi, Yoko
Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.
NASA Astrophysics Data System (ADS)
Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten
2012-08-01
A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
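The decoding pipeline is not specified here beyond "directionally dependent ECoG signals"; as a hedged sketch of a two-class direction decoder of that general kind, the example below trains a regularized linear classifier on per-channel band-power features. The data and feature choices are synthetic placeholders, not the authors' method.

```python
# Hedged sketch (not the authors' decoder): classify cursor direction
# (left vs. right) from per-channel ECoG band power with a regularized
# linear model. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels = 120, 32
labels = rng.integers(0, 2, size=n_trials)              # 0 = left, 1 = right

# Placeholder band-power features; a few channels carry direction information.
power = rng.lognormal(mean=0.0, sigma=0.5, size=(n_trials, n_channels))
power[:, :4] *= np.exp(0.5 * (2 * labels[:, None] - 1))
features = np.log(power)                                 # log band power

decoder = LogisticRegression(C=0.1, max_iter=1000)       # L2-regularized
accuracy = cross_val_score(decoder, features, labels, cv=5).mean()
print(f"cross-validated direction-decoding accuracy: {accuracy:.2f}")
```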
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrubiak, Rostislav; Sinogeikin, Stanislav; Rod, Eric
We have designed and constructed a new system for micro-machining parts and sample assemblies used for diamond anvil cells and general user operations at the High Pressure Collaborative Access Team, sector 16 of the Advanced Photon Source. The new micro-machining system uses a pulsed laser of 400 ps pulse duration, ablating various materials without thermal melting, thus leaving a clean edge. With optics designed for a tight focus, the system can machine holes any size larger than 3 μm in diameter. Unlike a standard electrical discharge machining drill, the new laser system allows micro-machining of non-conductive materials such as amorphous boron and silicon carbide gaskets, diamond, oxides, and other materials including organic materials such as polyimide films (i.e., Kapton). An important feature of the new system is the use of gas-tight or gas-flow environmental chambers which allow the laser micro-machining to be done in a controlled (e.g., inert gas) atmosphere to prevent oxidation and other chemical reactions in air sensitive materials. The gas-tight workpiece enclosure is also useful for machining materials with known health risks (e.g., beryllium). Specialized control software with a graphical interface enables micro-machining of custom 2D and 3D shapes. The laser-machining system was designed in a Class 1 laser enclosure, i.e., it includes laser safety interlocks and computer controls and allows for routine operation. Though initially designed mainly for machining of the diamond anvil cell gaskets, the laser-machining system has since found many other micro-machining applications, several of which are presented here.
Battery electric vehicles - implications for the driver interface.
Neumann, Isabel; Krems, Josef F
2016-03-01
The current study examines the human-machine interface of a battery electric vehicle (BEV) from a user-perspective, focussing on the evaluation of BEV-specific displays, the relevance of provided information and challenges for drivers due to the concept of electricity in a road vehicle. A sample of 40 users drove a BEV for 6 months. Data were gathered at three points of data collection. Participants perceived the BEV-specific displays as only moderately reliable and helpful for estimating the displayed parameters. This was even less the case after driving the BEV for 3 months. A taxonomy of user requirements was compiled revealing the need for improved and additional information, especially regarding energy consumption and efficiency. Drivers had difficulty understanding electrical units and the energy consumption of the BEV. On the background of general principles for display design, results provide implications how to display relevant information and how to facilitate drivers' understanding of energy consumption in BEVs. Practitioner Summary: Battery electric vehicle (BEV) displays need to incorporate new information. A taxonomy of user requirements was compiled revealing the need for improved and additional information in the BEV interface. Furthermore, drivers had trouble understanding electrical units and energy consumption; therefore, appropriate assistance is required. Design principles which are specifically important in the BEV context are discussed.
Cognitive Awareness Prototype Development on User Interface Design
ERIC Educational Resources Information Center
Rosli, D'oria Islamiah
2015-01-01
Human error is a crucial problem in manufacturing industries. Due to the misinterpretation of information on interface system design, accidents or death may occur at workplace. Lack of human cognition criteria in interface system design is also one of the contributions to the failure in using the system effectively. Therefore, this paper describes…
NASA Astrophysics Data System (ADS)
Yamada, Naoya; Wada, Masato; Kabir, M. Hasnat; Gong, Jin; Furukawa, Hidemitsu
2013-03-01
Gels are soft and wet materials that differ from hard and dry materials like metals, plastics, and ceramics. They have some unique characteristics such as low friction, high water content, and material permeability. A decade earlier, DN gels with a mechanical strength of 30 MPa maximum breaking stress in compression were developed, and they are a prospective material for biomaterials of the human body; indeed, their frictional coefficient and mechanical strength are comparable to those of our cartilage. In this study, we focus on the dynamic frictional interface of hydrogels and aim to develop a new apparatus with a polarization microscope for its observation. The dynamic interface is observed during friction between a gel and glass with a hydroxypropylcellulose (HPC) polymer solution sandwiched in between. In the first step, we rubbed the hydrogel against glass with the HPC solution sandwiched in between on the stage of a polarization microscope. In the second step, we designed a new system that combines the microscope with a friction-measuring machine. The comparison between direct observation with this instrument and measurement of the friction coefficient will become a foothold for elucidating the distinctive frictional phenomena seen in soft and wet materials.
Theoretical study on the top- and enclosed-contacted single-layer MoS2 piezotronic transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wei, E-mail: wliu@binn.cas.cn, E-mail: zlwang@gatech.edu; Zhou, Yongli; Zhang, Aihua
Recently, the piezotronic effect has been observed in two-dimensional single-layer MoS2 materials, which have potential applications in force- and pressure-triggered or -controlled electronic devices, sensors, and human-machine interfaces. However, classical theory faces difficulty in explaining the mechanism of the piezotronic effect for the top- and enclosed-contacted MoS2 transistors, since the piezoelectric charges are assumed to exist only at the edge of the MoS2 flake, which is far from the electronic transport pathway. In the present study, we identify the piezoelectric charges at the MoS2/metal-MoS2 interface by employing both density functional theory and finite element method simulations. This interface is on the transport pathway of both top- and enclosed-contacted MoS2 transistors, thus it is capable of controlling their transport properties. This study deepens the understanding of the piezotronic effect and provides guidance for the design of two-dimensional piezotronic devices.
NASA Technical Reports Server (NTRS)
Karl, D. R.
1972-01-01
An evaluation was made of the feasibility of utilizing a simplified man machine interface concept to manage and control a complex space system involving multiple redundant computers that control multiple redundant subsystems. The concept involves the use of a CRT for display and a simple keyboard for control, with a tree-type control logic for accessing and controlling mission, systems, and subsystem elements. The concept was evaluated in terms of the Phase B space shuttle orbiter, to utilize the wide scope of data management and subsystem control inherent in the central data management subsystem provided by the Phase B design philosophy. Results of these investigations are reported in four volumes.
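As a hedged illustration of the tree-type control logic described above, the sketch below implements a nested menu navigated with single key presses on a CRT-plus-keyboard style interface; the mission/system/subsystem node names are invented, not taken from the Phase B shuttle design.

```python
# Illustrative sketch of a tree-type control logic: a nested menu navigated
# with single key presses. Node names are invented placeholders.
MENU_TREE = {
    "MISSION": {"TIMELINE": None, "TARGETS": None},
    "SYSTEMS": {
        "POWER": {"FUEL CELL 1": None, "FUEL CELL 2": None},
        "GUIDANCE": {"IMU": None, "STAR TRACKER": None},
    },
}

def navigate(tree, path=()):
    """Walk the tree from key presses; '0' backs up one level."""
    while True:
        items = list(tree)
        print(" / ".join(path) or "TOP LEVEL")
        for i, name in enumerate(items, start=1):
            print(f"  {i}  {name}")
        key = input("select (0 = up): ").strip()
        if key == "0":
            return
        choice = items[int(key) - 1]
        subtree = tree[choice]
        if subtree is None:
            print(f"** displaying status page for {choice} **")
        else:
            navigate(subtree, path + (choice,))

# navigate(MENU_TREE)  # uncomment for an interactive demo
```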
NASA Astrophysics Data System (ADS)
Abbott, W. W.; Faisal, A. A.
2012-08-01
Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s^-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.
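A common way to obtain a 3D gaze point from a binocular tracker is to intersect, approximately, the two eyes' gaze rays; the hedged sketch below takes the midpoint of the shortest segment between the rays. It illustrates the geometry only and is not the authors' calibration or estimation pipeline; eye positions and directions are placeholders.

```python
# Hedged sketch: estimate a 3-D gaze point as the midpoint of the shortest
# segment between the left- and right-eye gaze rays.
import numpy as np

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p + t*d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom     # parameter along the left-eye ray
    t2 = (a * e - b * d) / denom     # parameter along the right-eye ray
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

left_eye = np.array([-0.03, 0.0, 0.0])           # metres, head frame
right_eye = np.array([0.03, 0.0, 0.0])
left_dir = np.array([0.03, 0.0, 0.5])            # converging ~0.5 m ahead
right_dir = np.array([-0.03, 0.0, 0.5])

gaze_3d = closest_point_between_rays(left_eye, left_dir, right_eye, right_dir)
print("estimated 3-D gaze point:", gaze_3d)
```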
Barz, F; Livi, A; Lanzilotto, M; Maranesi, M; Bonini, L; Paul, O; Ruther, P
2017-06-01
Application-specific designs of electrode arrays offer an improved effectiveness for providing access to targeted brain regions in neuroscientific research and brain machine interfaces. The simultaneous and stable recording of neuronal ensembles is the main goal in the design of advanced neural interfaces. Here, we describe the development and assembly of highly customizable 3D microelectrode arrays and demonstrate their recording performance in chronic applications in non-human primates. System assembly relies on a microfabricated stacking component that is combined with Michigan-style silicon-based electrode arrays interfacing highly flexible polyimide cables. Based on the novel stacking component, the lead time for implementing prototypes with altered electrode pitches is minimal. Once the fabrication and assembly accuracy of the stacked probes have been characterized, their recording performance is assessed during in vivo chronic experiments in awake rhesus macaques (Macaca mulatta) trained to execute reaching-grasping motor tasks. Using a single set of fabrication tools, we implemented three variants of the stacking component for electrode distances of 250, 300 and 350 µm in the stacking direction. We assembled neural probes with up to 96 channels and an electrode density of 98 electrodes mm^-2. Furthermore, we demonstrate that the shank alignment is accurate to a few µm at an angular alignment better than 1°. Three 64-channel probes were chronically implanted in two monkeys providing single-unit activity on more than 60% of all channels and excellent recording stability. Histological tissue sections, obtained 52 d after implantation from one of the monkeys, showed minimal tissue damage, in accordance with the high quality and stability of the recorded neural activity. The versatility of our fabrication and assembly approach should significantly support the development of ideal interface geometries for a broad spectrum of applications. With the demonstrated performance, these probes are suitable for both semi-chronic and chronic applications.
NASA Astrophysics Data System (ADS)
Barz, F.; Livi, A.; Lanzilotto, M.; Maranesi, M.; Bonini, L.; Paul, O.; Ruther, P.
2017-06-01
Objective. Application-specific designs of electrode arrays offer an improved effectiveness for providing access to targeted brain regions in neuroscientific research and brain machine interfaces. The simultaneous and stable recording of neuronal ensembles is the main goal in the design of advanced neural interfaces. Here, we describe the development and assembly of highly customizable 3D microelectrode arrays and demonstrate their recording performance in chronic applications in non-human primates. Approach. System assembly relies on a microfabricated stacking component that is combined with Michigan-style silicon-based electrode arrays interfacing highly flexible polyimide cables. Based on the novel stacking component, the lead time for implementing prototypes with altered electrode pitches is minimal. Once the fabrication and assembly accuracy of the stacked probes have been characterized, their recording performance is assessed during in vivo chronic experiments in awake rhesus macaques (Macaca mulatta) trained to execute reaching-grasping motor tasks. Main results. Using a single set of fabrication tools, we implemented three variants of the stacking component for electrode distances of 250, 300 and 350 µm in the stacking direction. We assembled neural probes with up to 96 channels and an electrode density of 98 electrodes mm^-2. Furthermore, we demonstrate that the shank alignment is accurate to a few µm at an angular alignment better than 1°. Three 64-channel probes were chronically implanted in two monkeys providing single-unit activity on more than 60% of all channels and excellent recording stability. Histological tissue sections, obtained 52 d after implantation from one of the monkeys, showed minimal tissue damage, in accordance with the high quality and stability of the recorded neural activity. Significance. The versatility of our fabrication and assembly approach should significantly support the development of ideal interface geometries for a broad spectrum of applications. With the demonstrated performance, these probes are suitable for both semi-chronic and chronic applications.
Cognitive Foundry v. 3.0 (OSS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basilico, Justin; Dixon, Kevin; McClain, Jonathan
2009-11-18
The Cognitive Foundry is a unified collection of tools designed for research and applications that use cognitive modeling, machine learning, or pattern recognition. The software library contains design patterns, interface definitions, and default implementations of reusable software components and algorithms designed to support a wide variety of research and development needs. The library contains three main software packages: the Common package that contains basic utilities and linear algebraic methods, the Cognitive Framework package that contains tools to assist in implementing and analyzing theories of cognition, and the Machine Learning package that provides general algorithms and methods for populating Cognitive Framework components from domain-relevant data.
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
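As a hedged sketch of a finite state machine scripting layer of the kind mentioned above, the example below drives a thrown reconnaissance robot through a small transition table; the states, events, and actions are invented for illustration and are not from the described prototype.

```python
# Minimal sketch of a finite-state-machine scripting layer for a thrown
# reconnaissance robot (states, events and actions are invented).
from dataclasses import dataclass

@dataclass
class Event:
    name: str                      # e.g. "landed", "operator_cmd", "link_lost"

# Transition table: (current_state, event_name) -> next_state.
TRANSITIONS = {
    ("airborne", "landed"): "self_right",
    ("self_right", "upright"): "teleop",
    ("teleop", "link_lost"): "autonomous_survey",
    ("autonomous_survey", "operator_cmd"): "teleop",
    ("teleop", "battery_low"): "hibernate",
    ("autonomous_survey", "battery_low"): "hibernate",
}

ACTIONS = {
    "self_right": lambda: print("running self-righting routine"),
    "teleop": lambda: print("streaming video, awaiting operator commands"),
    "autonomous_survey": lambda: print("holding position, scanning for motion"),
    "hibernate": lambda: print("low-power mode, beaconing position"),
}

def step(state, event):
    nxt = TRANSITIONS.get((state, event.name), state)  # ignore unknown events
    if nxt != state:
        ACTIONS.get(nxt, lambda: None)()
    return nxt

state = "airborne"
for ev in ["landed", "upright", "link_lost", "operator_cmd", "battery_low"]:
    state = step(state, Event(ev))
print("final state:", state)
```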
Energy-Efficient Hosting Rich Content from Mobile Platforms with Relative Proximity Sensing
Baek, Sung Hoon
2017-01-01
In this paper, we present a tiny networked mobile platform, termed Tiny-Web-Thing (T-Wing), which allows the sharing of data-intensive content among objects in cyber-physical systems. The objects include mobile platforms such as smartphones and Internet of Things (IoT) platforms for Human-to-Human (H2H), Human-to-Machine (H2M), Machine-to-Human (M2H), and Machine-to-Machine (M2M) communications. T-Wing makes it possible to host rich web content directly on these objects, which nearby objects can access instantaneously. Using a new mechanism that allows the Wi-Fi interface of the object to be turned on purely on demand, T-Wing achieves very high energy efficiency. We have implemented T-Wing on an embedded board and present evaluation results from our testbed. From these results, we compare our system against alternative approaches that implement this functionality using only the cellular or only the Wi-Fi interface (but not both), and show that in typical usage T-Wing consumes roughly 15 times less energy and is faster by an order of magnitude. PMID:28786942
WTEC panel report on European nuclear instrumentation and controls
NASA Technical Reports Server (NTRS)
White, James D.; Lanning, David D.; Beltracchi, Leo; Best, Fred R.; Easter, James R.; Oakes, Lester C.; Sudduth, A. L.
1991-01-01
Control and instrumentation systems might be called the 'brain' and 'senses' of a nuclear power plant. As such they become the key elements in the integrated operation of these plants. Recent developments in digital equipment have allowed a dramatic change in the design of these instrument and control (I&C) systems. New designs are evolving with cathode ray tube (CRT)-based control rooms, more automation, and better logical information for the human operators. As these new advanced systems are developed, various decisions must be made about the degree of automation and the human-to-machine interface. Different stages of the development of control automation and of advanced digital systems can be found in various countries. The purpose of this technology assessment is to make a comparative evaluation of the control and instrumentation systems that are being used for commercial nuclear power plants in Europe and the United States. This study is limited to pressurized water reactors (PWR's). Part of the evaluation includes comparisons with a previous similar study assessing Japanese technology.
Traffic Aware Planner for Cockpit-Based Trajectory Optimization
NASA Technical Reports Server (NTRS)
Woods, Sharon E.; Vivona, Robert A.; Henderson, Jeffrey; Wing, David J.; Burke, Kelly A.
2016-01-01
The Traffic Aware Planner (TAP) software application is a cockpit-based advisory tool designed to be hosted on an Electronic Flight Bag and to enable and test the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR). The TASAR concept provides pilots with optimized route changes (including altitude) that reduce fuel burn and/or flight time, avoid interactions with known traffic, weather and restricted airspace, and may be used by the pilots to request a route and/or altitude change from Air Traffic Control. Developed using an iterative process, TAP's latest improvements include human-machine interface design upgrades and added functionality based on the results of human-in-the-loop simulation experiments and flight trials. Architectural improvements have been implemented to prepare the system for operational-use trials with partner commercial airlines. Future iterations will enhance coordination with airline dispatch and add functionality to improve the acceptability of TAP-generated route-change requests to pilots, dispatchers, and air traffic controllers.
Strauss, G; Winkler, D; Jacobs, S; Trantakis, C; Dietz, A; Bootz, F; Meixensberger, J; Falk, V
2005-07-01
This study examines the advantages and disadvantages of a commercial telemanipulator system (daVinci, Intuitive Surgical, USA) with computer-guided instruments in functional endoscopic sinus surgery (FESS). We performed five different surgical FESS steps on 14 anatomical preparations and compared them with conventional FESS. A total of 140 procedures were examined taking into account the following parameters: degrees of freedom (DOF), duration, learning curve, force feedback, and human-machine interface. Telemanipulator instruments offer more DOF than conventional instrumentation in FESS. The average time consumed by configuration of the telemanipulator is around 9+/-2 min. Missing force feedback is evaluated mainly as a disadvantage of the telemanipulator. Scaling was evaluated as helpful. The ergonomic concept seems to be better than the conventional solution. Computer-guided instruments showed better results for the available DOF of the instruments. The human-machine interface is more adaptable and variable than in conventional instrumentation. Motion scaling and indexing are characteristics of the telemanipulator concept that were helpful for FESS in our study.
Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario
2015-01-01
Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system’s complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs. PMID:26069961
Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario
2015-01-01
Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system's complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs.
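A minimal hedged sketch of the scan-and-acknowledge loop described in these abstracts is given below; the electro-tactile stimulation and command-signal acquisition are stubbed out, and the option names, dwell times, and thresholds are invented for illustration.

```python
# Minimal sketch of the scan-and-acknowledge selection loop: the system
# presents options one by one via (stubbed) electro-tactile stimulation and
# the user confirms with a single (stubbed) command signal.
import time
import random

OPTIONS = ["power grasp", "pinch grasp", "wrist rotate", "hand open"]

def present_option_tactile(option_index):
    """Stub: deliver the electro-tactile pattern that encodes this option."""
    print(f"[stimulation] presenting option {option_index}: {OPTIONS[option_index]}")

def command_signal_detected():
    """Stub: return True when the single user command (e.g. one contraction
    above threshold) is detected during the dwell window."""
    return random.random() < 0.3

def select_function(dwell_s=1.5):
    while True:                              # keep cycling until acknowledged
        for i in range(len(OPTIONS)):
            present_option_tactile(i)
            time.sleep(dwell_s)              # window for the acknowledgement
            if command_signal_detected():
                return OPTIONS[i]

print("selected:", select_function(dwell_s=0.1))
```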
Human factors with nonhumans - Factors that affect computer-task performance
NASA Technical Reports Server (NTRS)
Washburn, David A.
1992-01-01
There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.
Liquid lens: advances in adaptive optics
NASA Astrophysics Data System (ADS)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
A user interface for a knowledge-based planning and scheduling system
NASA Technical Reports Server (NTRS)
Mulvehill, Alice M.
1988-01-01
The objective of EMPRESS (Expert Mission Planning and Replanning Scheduling System) is to support the planning and scheduling required to prepare science and application payloads for flight aboard the US Space Shuttle. EMPRESS was designed and implemented in Zetalisp on a 3600 series Symbolics Lisp machine. Initially, EMPRESS was built as a concept demonstration system. The system has since been modified and expanded to ensure that the data have integrity. Issues underlying the design and development of the EMPRESS-I interface, results from a system usability assessment, and consequent modifications are described.
Advanced system functions for the office information system
NASA Astrophysics Data System (ADS)
Ishikawa, Tetsuya
First, the author describes the functions needed for an office information management system. Next, he discusses the requirements for enhancing system functions: in order to enhance them, he states, it is necessary to examine them comprehensively from every point of view, including processing time and cost. In this paper, he concentrates on the enhancement of the man-machine interface (human interface), that is, how to make the system easy to use for office workers.
Designing Anticancer Peptides by Constructive Machine Learning.
Grisoni, Francesca; Neuhaus, Claudia S; Gabernet, Gisela; Müller, Alex T; Hiss, Jan A; Schneider, Gisbert
2018-04-21
Constructive (generative) machine learning enables the automated generation of novel chemical structures without the need for explicit molecular design rules. This study presents the experimental application of such a deep machine learning model to design membranolytic anticancer peptides (ACPs) de novo. A recurrent neural network with long short-term memory cells was trained on α-helical cationic amphipathic peptide sequences and then fine-tuned with 26 known ACPs by transfer learning. This optimized model was used to generate unique and novel amino acid sequences. Twelve of the peptides were synthesized and tested for their activity on MCF7 human breast adenocarcinoma cells and selectivity against human erythrocytes. Ten of these peptides were active against cancer cells. Six of the active peptides killed MCF7 cancer cells without affecting human erythrocytes with at least threefold selectivity. These results advocate constructive machine learning for the automated design of peptides with desired biological activities. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
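As a hedged sketch of a character-level LSTM sequence generator of the kind the abstract describes, one could proceed roughly as follows in Keras; the architecture, hyperparameters, and training sequences below are placeholders, not the published model or dataset.

```python
# Hedged sketch of a character-level LSTM peptide generator (placeholders only).
import numpy as np
import tensorflow as tf

AA = "ACDEFGHIKLMNPQRSTVWY"                       # 20 amino acids
stoi = {a: i + 1 for i, a in enumerate(AA)}       # 0 reserved for padding
maxlen = 30

def encode(seqs):
    x = np.zeros((len(seqs), maxlen), dtype=np.int32)
    for i, s in enumerate(seqs):
        x[i, :len(s)] = [stoi[a] for a in s[:maxlen]]
    return x

# Placeholder training sequences (the study pretrains on helical cationic
# amphipathic peptides, then fine-tunes on 26 known ACPs by further training).
train_seqs = ["KLAKLAKKLAKLAK", "GIGKFLHSAKKFGKAFVGEIMNS"]
x = encode(train_seqs)
inputs, targets = x[:, :-1], x[:, 1:]             # predict the next residue

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(AA) + 1, 32, mask_zero=True),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dense(len(AA) + 1, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(inputs, targets, epochs=5, verbose=0)   # pretrain, then fine-tune

def sample(n_residues=20):
    seq = [stoi["G"]]
    for _ in range(n_residues - 1):
        probs = model.predict(np.array([seq]), verbose=0)[0, -1]
        probs[0] = 0.0                             # never sample the pad token
        seq.append(np.random.choice(len(probs), p=probs / probs.sum()))
    return "".join(AA[i - 1] for i in seq)

print(sample())
```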
A haptic interface for virtual simulation of endoscopic surgery.
Rosenberg, L B; Stredney, D
1996-01-01
Virtual reality can be described as a convincingly realistic and naturally interactive simulation in which the user is given a first-person illusion of being immersed within a computer-generated environment. While virtual reality systems offer great potential to reduce the cost and increase the quality of medical training, many technical challenges must be overcome before such simulation platforms offer effective alternatives to more traditional training means. A primary challenge in developing effective virtual reality systems is designing the human interface hardware which allows rich sensory information to be presented to users in natural ways. When simulating a given manual procedure, task-specific human interface requirements dictate task-specific human interface hardware. The following paper explores the design of human interface hardware that satisfies the task-specific requirements of virtual reality simulation of endoscopic surgical procedures. Design parameters were derived through direct cadaver studies and interviews with surgeons. The final hardware design is presented.
Speech emotion recognition methods: A literature review
NASA Astrophysics Data System (ADS)
Basharirad, Babak; Moradhaseli, Mohammadreza
2017-10-01
Recently, attention to emotional speech signal research has been boosted in human-machine interfaces due to the availability of high computation capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, this paper also evaluates the performance and limitations of available methods. Furthermore, it highlights the current promising directions for the improvement of speech emotion recognition systems.
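A common baseline that touches the issues named above, offered here only as a hedged illustration and not as a method from the review, is utterance-level MFCC statistics fed to an SVM classifier; the audio below is synthetic noise standing in for real labelled utterances.

```python
# Hedged baseline sketch: mean/std MFCC features per utterance fed to an SVM
# emotion classifier. Data are synthetic placeholders.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def utterance_features(y, sr=16000):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # (13, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(0)
utterances = [rng.normal(size=16000) for _ in range(40)]   # 1 s placeholders
labels = ["angry", "happy", "neutral", "sad"] * 10

X = np.array([utterance_features(y) for y in utterances])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, labels)
print(clf.predict(X[:2]))
```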
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P.; /Fermilab
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
Design intelligent wheelchair with ECG measurement and wireless transmission function.
Chou, Hsi-Chiang; Wang, Yi-Ming; Chang, Huai-Yuan
2015-01-01
The phenomenon of aging populations has produced widespread health awareness and magnified the need for improved medical quality and technologies. Statistics show that ischemic heart disease is the leading cause of death for older people and people with reduced mobility; therefore, wheelchairs have become their primary means of transport. Hence, an arrhythmia-detecting smart wheelchair was proposed in this study to provide real-time electrocardiography (ECG)-monitoring to patients with heart disease and reduced mobility. A self-developed, handheld ECG-sensing instrument was integrated with a wheelchair and a lab-written, arrhythmia-detecting program. The measured ECG data were transmitted through a Wi-Fi module and analyzed and diagnosed using the human-machine interface.
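The arrhythmia-detecting program is lab-written and not described in detail; as a hedged sketch of one simple R-R-interval approach (peak detection followed by rate and regularity rules), the example below uses invented thresholds and a synthetic ECG, and is not the authors' algorithm.

```python
# Hedged sketch: detect R peaks with a threshold peak finder and flag rhythm
# irregularities from R-R intervals. Thresholds and signal are illustrative.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                        # sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
# Synthetic ECG stand-in: narrow pulses every ~0.75 s plus noise.
ecg = np.zeros_like(t)
ecg[(np.arange(0, 10, 0.75) * fs).astype(int)] = 1.0
ecg = np.convolve(ecg, np.hanning(20), mode="same") + 0.05 * np.random.randn(len(t))

peaks, _ = find_peaks(ecg, height=0.4, distance=int(0.3 * fs))
rr = np.diff(peaks) / fs                        # R-R intervals [s]
heart_rate = 60.0 / rr

alerts = []
if np.any(heart_rate > 100):
    alerts.append("tachycardia?")
if np.any(heart_rate < 60):
    alerts.append("bradycardia?")
if np.any(np.abs(np.diff(rr)) > 0.2 * rr[:-1]):
    alerts.append("irregular rhythm?")
print("mean HR %.0f bpm, alerts: %s" % (heart_rate.mean(), alerts or "none"))
```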
Effect of display size on visual attention.
Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao
2011-06-01
Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality heavily builds on latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man machine interfaces for complex applications, not only the Virtual Reality part has to be considered but also the capabilities of the underlying robot and automation controller are of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real-time helps to make the realized automation system very responsive to real world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
Scanpath-based analysis of objects conspicuity in context of human vision physiology.
Augustyniak, Piotr
2007-01-01
This paper discusses principal aspects of object conspicuity investigated with the use of an eye tracker and interpreted against the background of human vision physiology. Proper management of object conspicuity is fundamental in several leading-edge applications in the information society, such as advertisement, web design, man-machine interfacing and ergonomics. Although some common rules of human perception have been applied in art for centuries, interest in the human perception process is motivated today by the need to gather and maintain the recipient's attention by putting selected messages in front of the others. Our research uses the visual task methodology and series of progressively modified natural images. The modified details were characterized by their size, color and position, while the scanpath-derived gaze points confirmed or not the act of perception. The statistical analysis yielded the probability of detail perception and its correlations with the detail attributes. This probability conforms to the knowledge of retinal anatomy and perception physiology, although we use noninvasive methods only.
Kant, Vivek
2017-03-01
Jens Rasmussen's contribution to the field of human factors and ergonomics has had a lasting impact. Six prominent interrelated themes can be extracted from his research between 1961 and 1986. These themes form the basis of an engineering epistemology which is best manifested by his abstraction hierarchy. Further, Rasmussen reformulated technical reliability using systems language to enable a proper human-machine fit. To understand the concept of human-machine fit, he included the operator as a central component in the system to enhance system safety. This change resulted in the application of a qualitative and categorical approach for human-machine interaction design. Finally, Rasmussen's insistence on a working philosophy of systems design as being a joint responsibility of operators and designers provided the basis for averting errors and ensuring safe and correct system functioning. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Boltzmann machine for the organization of intelligent machines
NASA Technical Reports Server (NTRS)
Moed, Michael C.; Saridis, George N.
1989-01-01
In the present technological society, there is a major need to build machines that would execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots, utilizing heuristic ideas, there is no systematic approach to design such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems, AI and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). Then this machine will serve to search for the optimal design of the organization level of an intelligent machine. In order to accomplish this, some mathematical theory of intelligent machines will first be outlined. Then definitions of the variables associated with the principle, such as machine intelligence, machine knowledge, and precision, will be given (Saridis, Valavanis 1988). Then a procedure to establish the Boltzmann machine on an analytic basis will be presented and illustrated by an example in designing the organization level of an Intelligent Machine. A new search technique, the Modified Genetic Algorithm, is presented and proved to converge to the minimum of a cost function. Finally, simulations will show the effectiveness of a variety of search techniques for the intelligent machine.
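The search idea behind a Boltzmann machine can be illustrated with a generic stochastic acceptance rule over binary node states, as in the sketch below. The cost function and cooling schedule here are placeholders, not Saridis's analytic formulation or the Modified Genetic Algorithm described in the abstract.

# Generic sketch of Boltzmann-style search over binary node states: states are
# flipped at random and accepted with the Boltzmann probability exp(-dE/T).
# The cost function and cooling schedule are placeholders, not Saridis's formulation.
import math, random

def boltzmann_search(cost, n_nodes, steps=5000, t0=1.0, cooling=0.999):
    state = [random.randint(0, 1) for _ in range(n_nodes)]
    energy, temp = cost(state), t0
    for _ in range(steps):
        i = random.randrange(n_nodes)
        state[i] ^= 1                      # propose flipping one node
        new_energy = cost(state)
        de = new_energy - energy
        if de <= 0 or random.random() < math.exp(-de / temp):
            energy = new_energy            # accept the move
        else:
            state[i] ^= 1                  # reject: undo the flip
        temp *= cooling
    return state, energy

# Example placeholder cost: prefer configurations with exactly three active nodes
best, e = boltzmann_search(lambda s: abs(sum(s) - 3), n_nodes=10)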
Modelling of human-machine interaction in equipment design of manufacturing cells
NASA Astrophysics Data System (ADS)
Cochran, David S.; Arinez, Jorge F.; Collins, Micah T.; Bi, Zhuming
2017-08-01
This paper proposes a systematic approach to model human-machine interactions (HMIs) in supervisory control of machining operations; it characterises the coexistence of machines and humans for an enterprise to balance the goals of automation/productivity and flexibility/agility. In the proposed HMI model, an operator is associated with a set of behavioural roles as a supervisor for multiple, semi-automated manufacturing processes. The model is innovative in the sense that (1) it represents an HMI based on its functions for process control but provides the flexibility for ongoing improvements in the execution of manufacturing processes; (2) it provides a computational tool to define functional requirements for an operator in HMIs. The proposed model can be used to design production systems at different levels of an enterprise architecture, particularly at the machine level in a production system where operators interact with semi-automation to accomplish the goal of 'autonomation' - automation that augments the capabilities of human beings.
A Toolkit for Designing User Interfaces
1990-03-01
...as the NPS IB can provide prototyping capability. Interface generators are available commercially for nearly every computing machine on the market... The structure that holds attributes of the message buffer window (Figure 4.2 in the report) includes the variables nlines and nchars for the number of lines and characters in the window, giving the window its appearance of scrolling.
[A novel serial port auto trigger system for MOSFET dose acquisition].
Luo, Guangwen; Qi, Zhenyu
2013-01-01
To synchronize the radiation delivery of the microSelectron-HDR (a Nucletron afterloading machine) with the MOSFET dose measurement system, a trigger system based on an interface circuit was designed, and a corresponding monitor and trigger program was developed on the Qt platform. The interface and control system was tested and showed stable and reliable operation. The serial port detection technique adopted here may be extended to trigger applications for other medical devices.
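For orientation, a serial-port polling loop of the kind such a trigger system needs might look like the sketch below, written with pyserial rather than the authors' Qt program. The port name, baud rate, and the byte markers are hypothetical placeholders.

# Hedged sketch of a serial-port auto-trigger loop using pyserial. The port name,
# baud rate, and the byte sequences that mark "beam on" and "start measurement"
# are placeholders; the original Qt-based program is not reproduced here.
import serial  # pyserial

def monitor_and_trigger(port="COM3", baud=9600, start_marker=b"BEAM_ON"):
    with serial.Serial(port, baud, timeout=1.0) as link:
        while True:
            line = link.readline()               # poll the afterloader interface
            if not line:
                continue                         # timeout, keep polling
            if start_marker in line:
                link.write(b"TRIGGER_MOSFET\n")  # placeholder trigger command
                break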
Simulation of the human-telerobot interface
NASA Technical Reports Server (NTRS)
Stuart, Mark A.; Smith, Randy L.
1988-01-01
A part of NASA's Space Station will be a Flight Telerobotic Servicer (FTS) used to help assemble, service, and maintain the Space Station. Since the human operator will be required to control the FTS, the design of the human-telerobot interface must be optimized from a human factors perspective. Simulation has been used as an aid in the development of complex systems, and it is especially valuable here because few direct comparisons to existing systems can be made. Simulation should ensure that the hardware and software components of the human-telerobot interface have been designed and selected so that the operator's capabilities and limitations are accommodated. Three broad areas of the human-telerobot interface where simulation can be of assistance are described. The use of simulation not only can result in a well-designed human-telerobot interface, but also can be used to ensure that components have been selected to best meet the system's goals, and for operator training.
Development of regenerative peripheral nerve interfaces for motor control of neuroprosthetic devices
NASA Astrophysics Data System (ADS)
Kemp, Stephen W. P.; Urbanchek, Melanie G.; Irwin, Zachary T.; Chestek, Cynthia A.; Cederna, Paul S.
2017-05-01
Traumatic peripheral nerve injuries suffered during amputation commonly result in debilitating neuropathic pain in the affected limb. Modern prosthetic technologies allow for intuitive, simultaneous control of multiple degrees of freedom. However, these state-of-the-art devices require separate, independent control signals for each degree of freedom, which is currently not possible. As a result, amputees reject up to 75% of myoelectric devices, preferring instead to use body-powered artificial limbs which offer subtle sensory feedback. Without meaningful and intuitive sensory feedback, even the most advanced myoelectric prostheses remain insensate and burdensome and are associated with enormous cognitive demand and mental fatigue. The ideal prosthetic device is one which is capable of providing intuitive somatosensory feedback essential for interaction with the environment. Critical to the design of such a bioprosthetic device is the development of a reliable biologic interface between human and machine. This ideal patient-prosthetic interface allows for transmission of both afferent somatosensory information and efferent motor signals for a closed-loop feedback system of neural control. Our lab has developed the Regenerative Peripheral Nerve Interface (RPNI) as a biologic nerve interface designed for stable integration of a prosthetic device with transected peripheral nerves in a residual limb. The RPNI is constructed by surgically implanting the distal end of a transected peripheral nerve into an autogenous muscle graft. Animal experiments in our lab have shown recording of motor signals from RPNIs implanted into both rodents and monkeys. Here, we achieve high-amplitude EMG signals with a high signal-to-noise ratio (SNR).
Oppold, P; Rupp, M; Mouloua, M; Hancock, P A; Martin, J
2012-01-01
Unmanned systems (UAVs, UCAVs, and UGVs) still have major human factors and ergonomic challenges related to the effective design of their control interface systems, which is crucial to their efficient operation, maintenance, and safety. Unmanned system interfaces built with a human-centered approach promote intuitive interfaces that are easier to learn and reduce human errors and other cognitive ergonomic issues with interface design. Automation has shifted workload from physical to cognitive; thus control interfaces for unmanned systems need to reduce the mental workload on operators and facilitate the interaction between vehicle and operator. Two-handed video game controllers provide wide usability within the overall population, prior exposure for new operators, and a variety of interface complexity levels to match the complexity level of the task and reduce cognitive load. This paper categorizes and provides a taxonomy for 121 haptic interfaces from the entertainment industry that can be utilized as control interfaces for unmanned systems. Five categories of controllers were defined based on the complexity of the buttons, control pads, joysticks, and switches on the controller. This allows the selection of the level of complexity needed for a specific task without creating an entirely new design or utilizing an overly complex design.
The Development of Dispatcher Training Simulator in a Thermal Energy Generation System
NASA Astrophysics Data System (ADS)
Hakim, D. L.; Abdullah, A. G.; Mulyadi, Y.; Hasan, B.
2018-01-01
A dispatcher training simulator (DTS) is a real-time Human Machine Interface (HMI)-based control tool that is able to visualize industrial control system processes. The present study was aimed at developing a simulator tool for boilers in a thermal power station. The DTS prototype was designed using technical data of thermal power station boilers in Indonesia and was then implemented in Wonderware Intouch 10. The resulting simulator came with component drawings, animation, control displays, an alarm system, real-time trends, and historical trends. The application used 26 tagnames and was equipped with a security system. Testing showed that the principles of real-time control worked well. It is expected that this research can contribute significantly to the development of thermal power stations, particularly through its application as a training simulator for beginning dispatchers.
Design of virtual SCADA simulation system for pressurized water reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wijaksono, Umar, E-mail: umar.wijaksono@student.upi.edu; Abdullah, Ade Gafar; Hakim, Dadang Lukman
The Virtual SCADA system is a software-based Human-Machine Interface that can visualize the process of a plant. This paper described the results of the virtual SCADA system design that aims to recognize the principle of the Nuclear Power Plant type Pressurized Water Reactor. This simulation uses technical data of the Nuclear Power Plant Unit Olkiluoto 3 in Finland. This device was developed using Wonderware Intouch, which is equipped with manual books for each component, animation links, alarm systems, real time and historical trending, and security system. The results showed that in general this device can demonstrate clearly the principles of energy flow and energy conversion processes in Pressurized Water Reactors. This virtual SCADA simulation system can be used as instructional media to recognize the principle of Pressurized Water Reactor.
Formalisms for user interface specification and design
NASA Technical Reports Server (NTRS)
Auernheimer, Brent J.
1989-01-01
The application of formal methods to the specification and design of human-computer interfaces is described. A broad outline of human-computer interface problems, a description of the field of cognitive engineering and two relevant research results, the appropriateness of formal specification techniques, and potential NASA application areas are described.
Programmable Pulse-Position-Modulation Encoder
NASA Technical Reports Server (NTRS)
Zhu, David; Farr, William
2006-01-01
A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
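A simple software model can clarify what the hardware encoder does: each M-ary symbol selects one pulse slot within a frame. The sketch below is an illustrative model only (the PPM order, guard-slot count, and code-book handling are assumptions), not the FPGA or serializer design described above.

# Illustrative software model of pulse-position modulation (not the FPGA design):
# each M-ary symbol selects one slot out of `order` slots in a frame, and the
# encoder emits a 1 in that slot plus optional guard slots of 0s.
def ppm_encode(symbols, order=16, guard_slots=4):
    frames = []
    for s in symbols:
        if not 0 <= s < order:
            raise ValueError("symbol out of range for PPM order")
        frame = [0] * (order + guard_slots)
        frame[s] = 1                      # pulse position encodes the symbol value
        frames.extend(frame)
    return frames

# Example: encode two 16-ary symbols; the pulse lands in slots 3 and 9
waveform = ppm_encode([3, 9], order=16, guard_slots=4)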
Errare machinale est: the use of error-related potentials in brain-machine interfaces
Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.
2014-01-01
The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937
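Single-trial ErrP detection of the kind reviewed above typically amounts to epoching the EEG around machine actions and training a linear classifier. The sketch below shows one such pipeline with scikit-learn; the epoch window, sampling rate, and choice of LDA are assumptions for illustration, not the review's prescribed method.

# Sketch (assumptions: epoch window, channel selection, LDA classifier) of
# single-trial ErrP detection: cut epochs time-locked to machine actions and
# classify "error" vs "correct" trials with linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def epoch(eeg, events, fs, tmin=0.0, tmax=0.8):
    # eeg: (n_channels, n_samples); events: sample indices of machine actions
    n0, n1 = int(tmin * fs), int(tmax * fs)
    return np.stack([eeg[:, e + n0:e + n1] for e in events])   # (n_trials, ch, time)

def errp_accuracy(eeg, events, labels, fs=256):
    X = epoch(eeg, events, fs)
    X = X.reshape(len(events), -1)          # flatten channels x time into features
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    return cross_val_score(clf, X, labels, cv=5).mean()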
Designers' models of the human-computer interface
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Breedin, Sarah D.
1993-01-01
Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The author's approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
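The hierarchical-clustering half of such an analysis can be sketched with scipy, as below; the Pathfinder network analysis is not reproduced, and the small similarity matrix is made-up toy data standing in for pooled judgments of categorical similarity.

# Sketch of the hierarchical-clustering half of such an analysis (Pathfinder is not
# reproduced): convert pooled similarity judgments into dissimilarities and apply
# average-linkage clustering with scipy. The 4x4 matrix below is made-up toy data.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

concepts = ["window", "text", "command language", "help"]
similarity = np.array([[1.0, 0.8, 0.2, 0.3],
                       [0.8, 1.0, 0.3, 0.4],
                       [0.2, 0.3, 1.0, 0.6],
                       [0.3, 0.4, 0.6, 1.0]])
dissimilarity = squareform(1.0 - similarity, checks=False)  # condensed distance vector
tree = linkage(dissimilarity, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")          # cut into two clusters
print(dict(zip(concepts, groups)))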
UAS Integration in the NAS Project: Part Task 6 V & V Simulation: Primary Results
NASA Technical Reports Server (NTRS)
Rorie, Conrad; Fern, Lisa; Shively, Jay; Santiago, Confesor
2016-01-01
This is a presentation of the preliminary results on final V and V (Verification and Validation) activity of [RTCA (Radio Technical Commission for Aeronautics)] SC (Special Committee)-228 DAA (Detect and Avoid) HMI (Human-Machine Interface) requirements for display alerting and guidance.
Advanced technologies for Mission Control Centers
NASA Technical Reports Server (NTRS)
Dalton, John T.; Hughes, Peter M.
1991-01-01
Advance technologies for Mission Control Centers are presented in the form of the viewgraphs. The following subject areas are covered: technology needs; current technology efforts at GSFC (human-machine interface development, object oriented software development, expert systems, knowledge-based software engineering environments, and high performance VLSI telemetry systems); and test beds.
Determining Value in Higher Education: The Future of Instructional Technology in a Wal-Mart Economy.
ERIC Educational Resources Information Center
Tremblay, Wilfred
1992-01-01
Discusses value and the economy and examines the changing definition of educational value regarding higher education. Trends in instructional technology resulting from changes in expected educational value are described, including resource sharing, specialization, market expansion, privatization, easier human-machine interfaces, feedback systems,…
Experiencing the Sights, Smells, Sounds, and Climate of Southern Italy in VR.
Manghisi, Vito M; Fiorentino, Michele; Gattullo, Michele; Boccaccio, Antonio; Bevilacqua, Vitoantonio; Cascella, Giuseppe L; Dassisti, Michele; Uva, Antonio E
2017-01-01
This article explores what it takes to make interactive computer graphics and VR attractive as a promotional vehicle, from the points of view of tourism agencies and the tourists themselves. The authors exploited current VR and human-machine interface (HMI) technologies to develop an interactive, innovative, and attractive user experience called the Multisensory Apulia Touristic Experience (MATE). The MATE system implements a natural gesture-based interface and multisensory stimuli, including visuals, audio, smells, and climate effects.
A Framework to Guide the Assessment of Human-Machine Systems.
Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo
2017-03-01
We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.
NASA Astrophysics Data System (ADS)
Bjørner, Dines
Before software can be designed we must know its requirements. Before requirements can be expressed we must understand the domain. So it follows, from our dogma, that we must first establish precise descriptions of domains; then, from such descriptions, “derive” at least domain and interface requirements; and from those and machine requirements design the software, or, more generally, the computing systems.
The Human-Computer Interface and Information Literacy: Some Basics and Beyond.
ERIC Educational Resources Information Center
Church, Gary M.
1999-01-01
Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…
Intuitive wireless control of a robotic arm for people living with an upper body disability.
Fall, C L; Turgeon, P; Campeau-Lecours, A; Maheu, V; Boukadoum, M; Roy, S; Massicotte, D; Gosselin, C; Gosselin, B
2015-08-01
Assistive Technologies (ATs), also called extrinsic enablers, are useful tools for people living with various disabilities. The key points when designing such devices concern not only their intended goal but also the most suitable human-machine interface (HMI) that should be provided to users. This paper describes the design of a highly intuitive wireless controller for people living with upper body disabilities who retain residual or complete control of their neck and shoulders. Tested with JACO, a six-degree-of-freedom (6-DOF) assistive robotic arm with 3 flexible fingers on its end-effector, the system described in this article is made of low-cost commercial off-the-shelf components and allows a full emulation of JACO's standard controller, a 3-axis joystick with 7 user buttons. To do so, three nine-degree-of-freedom (9-DOF) inertial measurement units (IMUs) are connected to a microcontroller and help measure the user's head and shoulder positions, using a complementary filter approach. The results are then transmitted to a base-station via a 2.4-GHz low-power wireless transceiver and interpreted by the control algorithm running on a PC host. A dedicated software interface allows the user to quickly calibrate the controller, and translates the information into suitable commands for JACO. The proposed controller is thoroughly described, from the electronic design to implemented algorithms and user interfaces. Its performance and future improvements are discussed as well.
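A complementary filter of the kind mentioned above fuses the gyroscope rate with the accelerometer tilt estimate. The sketch below is a minimal single-axis version with an illustrative blending gain; it is not the authors' implementation and the parameter values are assumptions.

# Minimal complementary-filter sketch (illustrative gain, single tilt axis):
# fuse the gyroscope rate with the accelerometer tilt estimate so that slow drift
# follows the accelerometer while fast motion follows the integrated gyro.
import math

def complementary_filter(prev_angle, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    accel_angle = math.degrees(math.atan2(accel_y, accel_z))  # gravity-based tilt
    gyro_angle = prev_angle + gyro_rate * dt                  # integrated rate
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Example: one 10 ms update with a 5 deg/s pitch rate and a level accelerometer
angle = complementary_filter(prev_angle=2.0, gyro_rate=5.0,
                             accel_y=0.0, accel_z=9.81, dt=0.01)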
Context in Models of Human-Machine Systems
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Null, Cynthia H. (Technical Monitor)
1998-01-01
All human-machine systems models represent context. This paper proposes a theory of context through which models may be usefully related and integrated for design. The paper presents examples of context representation in various models, describes an application to developing models for the Crew Activity Tracking System (CATS), and advances context as a foundation for integrated design of complex dynamic systems.
Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles
NASA Technical Reports Server (NTRS)
Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.
2001-01-01
The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of the human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.
Human factors - Man-machine symbiosis in space
NASA Technical Reports Server (NTRS)
Brown, Jeri W.
1987-01-01
The relation between man and machine in space is studied. Early spaceflight and the goal of establishing a permanent space presence are described. The need to consider the physiological, psychological, and social integration of humans for each space mission is examined. Human factors must also be considered in the design of spacecraft. The effective utilization of man and machine capabilities, and research in anthropometry and biomechanics aimed at determining the limitations of spacecrews are discussed.
What makes an automated teller machine usable by blind users?
Manzke, J M; Egan, D H; Felix, D; Krueger, H
1998-07-01
Fifteen blind and sighted subjects, the latter serving as a control group for acceptance, were asked for their requirements for automated teller machines (ATMs). Both groups also tested the usability of a partially operational ATM mock-up. This machine was based on an existing cash dispenser, providing natural speech output, different function menus and different key arrangements. Performance and subjective evaluation data of blind and sighted subjects were collected. All blind subjects were able to operate the ATM successfully. The implemented speech output was the main usability factor for them. The different interface designs did not significantly affect performance and subjective evaluation. Nevertheless, design recommendations can be derived from the requirement assessment. The sighted subjects were rather open to design modifications, especially the implementation of speech output. However, there was also a mismatch between the requirements of the two subject groups, mainly concerning the key arrangement.
Designing the user interface: strategies for effective human-computer interaction
NASA Astrophysics Data System (ADS)
Shneiderman, B.
1998-03-01
In revising this popular book, Ben Shneiderman again provides a complete, current and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.
NASA Astrophysics Data System (ADS)
Hiatt, Keith L.; Rash, Clarence E.
2011-06-01
Background: Army Aviators rely on the ANVIS for night operations. Human factors literature notes that the ANVIS man-machine interface results in reports of visual and spinal complaints. This is the first study that has looked at these issues in the much harsher combat environment. Last year, the authors reported on the statistically significant (p<0.01) increased complaints of visual discomfort, degraded visual cues, and incidence of static and dynamic visual illusions in the combat environment [Proc. SPIE, Vol. 7688, 76880G (2010)]. In this paper we present the findings regarding increased spinal complaints and other man-machine interface issues found in the combat environment. Methods: A survey was administered to Aircrew deployed in support of Operation Enduring Freedom (OEF). Results: 82 Aircrew (representing an aggregate of >89,000 flight hours of which >22,000 were with ANVIS) participated. Analysis demonstrated high complaints of almost all levels of back and neck pain. Additionally, the use of body armor and other Aviation Life Support Equipment (ALSE) caused significant ergonomic complaints when used with ANVIS. Conclusions: ANVIS use in a combat environment resulted in higher and different types of reports of spinal symptoms and other man-machine interface issues over what was previously reported. Data from this study may be more operationally relevant than that of the peacetime literature as it is derived from actual combat and not from training flights, and it may have important implications about making combat predictions based on performance in training scenarios. Notably, Aircrew remarked that they could not execute the mission without ANVIS and ALSE and accepted the degraded ergonomic environment.
Liu, Yuhao; Norton, James J S; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R; Liu, Hank; Yan, Lingqing; Tran, Phat L; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A
2016-11-01
Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics.
Liu, Yuhao; Norton, James J. S.; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R.; Liu, Hank; Yan, Lingqing; Tran, Phat L.; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A.; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J.; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A.
2016-01-01
Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics. PMID:28138529
The RACE (Research and Development in Advanced Technologies for Europe) Program: A 1989 Update
1989-12-15
High Definition TV (HDTV) Experimental Usage... 1081 - Broadband User Network Interface (BUNI)... 1082 - develop man/machine interfaces that are consistent across a wide range of applications, providing a traffic analyzer and generator... The aims of 1082 are to provide usage reference models for the different types of usage and to define IBC quality of service requirements...
Designing the Instructional Interface.
ERIC Educational Resources Information Center
Lohr, L. L.
2000-01-01
Designing the instructional interface is a challenging endeavor requiring knowledge and skills in instructional and visual design, psychology, human-factors, ergonomic research, computer science, and editorial design. This paper describes the instructional interface, the challenges of its development, and an instructional systems approach to its…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoup, R.W.; Long, F.; Martin, T.H.
Sandia is developing PBFA-Z, a 20-MA driver for z-pinch experiments by replacing the water lines, insulator stack, and MITLs on PBFA II with new hardware. The design of the vacuum insulator stack was dictated by the drive voltage, the electric field stress and grading requirements, the water line and MITL interface requirements, and the machine operations and maintenance requirements. The insulator stack will consist of four separate modules, each of a different design because of different voltage drive and hardware interface requirements. The shape of the components in each module, i.e., grading rings, insulator rings, flux excluders, anode and cathode conductors, and the design of the water line and MITL interfaces, were optimized by using the electrostatic analysis codes, ELECTRO and JASON. The time dependent performance of the insulator stack was evaluated using IVORY, a 2-D PIC code. This paper will describe the insulator stack design and present the results of the ELECTRO and IVORY analyses.
Rapid Prototyping and the Human Factors Engineering Process
2016-08-29
...without the effort and cost associated with conventional man-in-the-loop simulation. Advocates suggest that rapid prototyping is compatible with... use should be made of man-in-the-loop simulation to supplement those analyses, but such simulation is expensive and time consuming, precluding... Rapid prototyping involves the construction and use of an executable model of a human-machine interface.
Tsui, Chun Sing Louis; Gan, John Q; Roberts, Stephen J
2009-03-01
Due to the non-stationarity of EEG signals, online training and adaptation are essential to EEG based brain-computer interface (BCI) systems. Self-paced BCIs offer more natural human-machine interaction than synchronous BCIs, but it is a great challenge to train and adapt a self-paced BCI online because the user's control intention and timing are usually unknown. This paper proposes a novel motor imagery based self-paced BCI paradigm for controlling a simulated robot in a specifically designed environment which is able to provide user's control intention and timing during online experiments, so that online training and adaptation of the motor imagery based self-paced BCI can be effectively investigated. We demonstrate the usefulness of the proposed paradigm with an extended Kalman filter based method to adapt the BCI classifier parameters, with experimental results of online self-paced BCI training with four subjects.
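The adaptation idea can be illustrated with a Kalman-filter update in which the classifier weights are the state and each labelled feedback trial is a noisy linear observation. The sketch below is a simplified linear version with made-up noise parameters; the paper itself uses an extended Kalman filter, which this does not reproduce.

# Simplified sketch of Kalman-filter-based classifier adaptation: the weights w are
# the state, assumed to follow a random walk, and each labelled feedback trial
# (feature vector x, target y) is a noisy linear observation y = x.w + noise.
import numpy as np

def kalman_adapt(w, P, x, y, q=1e-4, r=0.5):
    P = P + q * np.eye(len(w))          # random-walk process noise on the weights
    s = x @ P @ x + r                   # innovation variance (scalar observation)
    k = (P @ x) / s                     # Kalman gain
    w = w + k * (y - x @ w)             # correct the weights with the new trial
    P = P - np.outer(k, x) @ P          # update the weight covariance
    return w, P

w, P = np.zeros(4), np.eye(4)
w, P = kalman_adapt(w, P, x=np.array([0.2, -0.1, 0.4, 1.0]), y=1.0)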
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E
2014-07-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
Customization of user interfaces to reduce errors and enhance user acceptance.
Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram
2014-03-01
Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Shirai, Yasuhiro; Minami, Kosuke; Nakanishi, Waka; Yonamine, Yusuke; Joachim, Christian; Ariga, Katsuhiko
2016-11-01
Nanomachines and molecular machines are state-of-the-art objects in current physics and chemistry. The operation and manufacturing of nanosize machines are top-level technologies that we have desired to accomplish for a long time. There have been extensive attempts to design and synthesize nanomachines. In this paper, we review these attempts using the concept of nanoarchitectonics toward the design, synthesis, and testing of molecular machinery, especially at interfacial media. In the first half of this review, various historical attempts to design and prepare nanomachines are introduced, along with their operation mechanisms and basic principles. Furthermore, in order to emphasize the importance and possibilities of this research field, we also give examples of two new challenging topics in the second half of this review: (i) a worldwide nanocar race and (ii) new modes of nanomachine operation on water. The nanocar race event involves actual use of nanomachines and will take place in the near future, and nanomachine operation at a dynamic fluidic interface will enable future advances in nanomachine science and technology.
OTM Machine Acceptance: In the Arab Culture
NASA Astrophysics Data System (ADS)
Rashed, Abdullah; Santos, Henrique
Basically, neglecting the human factor is one of the main reasons for system failures or for technology rejection, even when important technologies are concerned. Biometrics mostly have the characteristics needed for effortless acceptance, such as ease of use and usefulness, which are essential pillars of acceptance models such as TAM (technology acceptance model). However, this should be investigated. Many studies have been carried out to research issues of technology acceptance in different cultures, especially western culture; Arabic culture lacks these types of studies, with few publications in this field. This paper introduces a new biometric interface for ATM machines. The interface depends on a promising biometric, odour. To assess the acceptance of this biometric, we distributed a questionnaire via a web site and called for participation in the Arab area, and found that most respondents would accept using odour.
Human factor engineering based design and modernization of control rooms with new I and C systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larraz, J.; Rejas, L.; Ortega, F.
2012-07-01
Instrumentation and Control (I and C) systems of the latest nuclear power plants are based on the use of digital technology, distributed control systems and the integration of information in data networks (Distributed Control and Instrumentation Systems). This has a repercussion on Control Rooms (CRs), where the operations and monitoring interfaces correspond to these systems. These technologies are also used in modernizing I and C systems in currently operative nuclear power plants. The new interfaces provide additional capabilities for operation and supervision, as well as a high degree of flexibility, versatility and reliability. An example of this is the implementation of solutions such as compact stations, high level supervision screens, overview displays, computerized procedures, new operational support systems or intelligent alarms processing systems in the modernized Man-Machine Interface (MMI). These changes in the MMI are accompanied by newly added Software (SW) controls and new solutions in automation. Tecnatom has been leading various projects in this area for several years, both in Asian countries and in the United States, using in all cases international standards from which Tecnatom own methodologies have been developed and optimized. The experience acquired in applying this methodology to the design of new control rooms is to a large extent applicable also to the modernization of current control rooms. An adequate design of the interface between the operator and the systems will facilitate safe operation, contribute to the prompt identification of problems and help in the distribution of tasks and communications between the different members of the operating shift. Based on Tecnatom experience in the field, this article presents the methodological approach used as well as the most relevant aspects of this kind of project. (authors)
Personalized keystroke dynamics for self-powered human-machine interfacing.
Chen, Jun; Zhu, Guang; Yang, Jin; Jing, Qingshen; Bai, Peng; Yang, Weiqing; Qi, Xuewei; Su, Yuanjie; Wang, Zhong Lin
2015-01-27
The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human-machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without applying an external power. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.
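The behavioral biometric mentioned above rests on classic keystroke-dynamics features such as dwell and flight times. The sketch below computes these from press/release timestamps; it is a generic illustration with made-up event data and does not model the IKB's triboelectric signal chain.

# Sketch of classic keystroke-dynamics features (dwell and flight times) computed
# from (key, press_time, release_time) tuples; the triboelectric signal chain of
# the IKB itself is not modelled here.
def keystroke_features(events):
    # events: list of (key, t_press, t_release), ordered by press time, in seconds
    dwell = [t_rel - t_prs for _, t_prs, t_rel in events]          # hold durations
    flight = [events[i + 1][1] - events[i][2]                      # release-to-press gaps
              for i in range(len(events) - 1)]
    return dwell, flight

dwell, flight = keystroke_features([("p", 0.00, 0.09),
                                    ("i", 0.21, 0.28),
                                    ("n", 0.40, 0.47)])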
CESAR research in intelligent machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.
1986-01-01
The Center for Engineering Systems Advanced Research (CESAR) was established in 1983 as a national center for multidisciplinary, long-range research and development in machine intelligence and advanced control theory for energy-related applications. Intelligent machines of interest here are artificially created operational systems that are capable of autonomous decision making and action. The initial emphasis for research is remote operations, with specific application to dexterous manipulation in unstructured dangerous environments where explosives, toxic chemicals, or radioactivity may be present, or in other environments with significant risk such as coal mining or oceanographic missions. Potential benefits include reduced risk to man in hazardous situations, machine replication of scarce expertise, minimization of human error due to fear or fatigue, and enhanced capability using high resolution sensors and powerful computers. A CESAR goal is to explore the interface between the advanced teleoperation capability of today, and the autonomous machines of the future.
Jeyabalan, Vickneswaran; Samraj, Andrews; Loo, Chu Kiong
2010-10-01
Aiming at the implementation of brain-machine interfaces (BMI) for the aid of disabled people, this paper presents a system design for real-time communication between the BMI and programmable logic controllers (PLCs) to control an electrical actuator that could be used in devices to help the disabled. Motor imaginary signals extracted from the brain’s motor cortex using an electroencephalogram (EEG) were used as a control signal. The EEG signals were pre-processed by means of adaptive recursive band-pass filtrations (ARBF) and classified using simplified fuzzy adaptive resonance theory mapping (ARTMAP) in which the classified signals are then translated into control signals used for machine control via the PLC. A real-time test system was designed using MATLAB for signal processing, KEP-Ware V4 OLE for process control (OPC), a wireless local area network router, an Omron Sysmac CPM1 PLC and a 5 V/0.3A motor. This paper explains the signal processing techniques, the PLC's hardware configuration, OPC configuration and real-time data exchange between MATLAB and PLC using the MATLAB OPC toolbox. The test results indicate that the function of exchanging real-time data can be attained between the BMI and PLC through OPC server and proves that it is an effective and feasible method to be applied to devices such as wheelchairs or electronic equipment.
Toward FRP-Based Brain-Machine Interfaces—Single-Trial Classification of Fixation-Related Potentials
Finke, Andrea; Essig, Kai; Marchioro, Giuseppe; Ritter, Helge
2016-01-01
The co-registration of eye tracking and electroencephalography provides a holistic measure of ongoing cognitive processes. Recently, fixation-related potentials have been introduced to quantify the neural activity in such bi-modal recordings. Fixation-related potentials are time-locked to fixation onsets, just like event-related potentials are locked to stimulus onsets. Compared to existing electroencephalography-based brain-machine interfaces that depend on visual stimuli, fixation-related potentials have the advantages that they can be used in free, unconstrained viewing conditions and can also be classified on a single-trial level. Thus, fixation-related potentials have the potential to allow for conceptually different brain-machine interfaces that directly interpret cortical activity related to the visual processing of specific objects. However, existing research has investigated fixation-related potentials only with very restricted and highly unnatural stimuli in simple search tasks while participant’s body movements were restricted. We present a study where we relieved many of these restrictions while retaining some control by using a gaze-contingent visual search task. In our study, participants had to find a target object out of 12 complex and everyday objects presented on a screen while the electrical activity of the brain and eye movements were recorded simultaneously. Our results show that our proposed method for the classification of fixation-related potentials can clearly discriminate between fixations on relevant, non-relevant and background areas. Furthermore, we show that our classification approach generalizes not only to different test sets from the same participant, but also across participants. These results promise to open novel avenues for exploiting fixation-related potentials in electroencephalography-based brain-machine interfaces and thus providing a novel means for intuitive human-machine interaction. PMID:26812487
The Impact of New Guidance and Control Systems on Military Aircraft Cockpit Design.
1981-08-01
...reduction of instrument panel area and of the complexity of the man/machine interfaces in high-performance combat aircraft... It should be noted that, in the current state of the art, a speech recognition machine has no intrinsic performance of its own; its performance... The principal dialogue device was a cathode-ray-tube console with keyboard. The vocabulary comprised 119 words, extracted from...
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the life quality of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for an intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real-time a unidimensional index from user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence on the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
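The mapping from the estimated EEG index to robot stiffness described above can be pictured with a minimal sketch; the linear mapping and the stiffness bounds below are illustrative assumptions, not values from the study.

# Illustrative only: map P(active state) in [0, 1] to a Cartesian stiffness value.
def stiffness_from_state_probability(p_active, k_min=100.0, k_max=1000.0):
    p = min(max(p_active, 0.0), 1.0)      # clamp to a valid probability
    return k_min + p * (k_max - k_min)    # assumed linear mapping, N/m

# A confidently "active" state commands a stiffer end-effector.
print(stiffness_from_state_probability(0.9))  # 910.0 N/m under these assumptions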
QUICK - An interactive software environment for engineering design
NASA Technical Reports Server (NTRS)
Skinner, David L.
1989-01-01
QUICK, an interactive software environment for engineering design, provides a programmable FORTRAN-like calculator interface to a wide range of data structures as well as both built-in and user created functions. QUICK also provides direct access to the operating systems of eight different machine architectures. The evolution of QUICK and a brief overview of the current version are presented.
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1993-01-01
Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
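As a concrete flavor of one such analytical model, the sketch below sums keystroke-level-model operator times to estimate a task completion time. The operator times are commonly cited textbook values and the task sequence is purely illustrative; neither is taken from this paper.

# Keystroke-level-model sketch with commonly cited operator times (seconds).
KLM_SECONDS = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def estimate_task_time(operator_sequence):
    # Sum operator times for a sequence such as "M H P K K K".
    return sum(KLM_SECONDS[op] for op in operator_sequence.split())

# Example: decide, reach for the mouse, point at a field, type three characters.
print(round(estimate_task_time("M H P K K K"), 2))  # ~3.69 s under these assumptions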
A subthreshold aVLSI implementation of the Izhikevich simple neuron model.
Rangan, Venkat; Ghosh, Abhishek; Aparin, Vladimir; Cauwenberghs, Gert
2010-01-01
We present a circuit architecture for compact analog VLSI implementation of the Izhikevich neuron model, which efficiently describes a wide variety of neuron spiking and bursting dynamics using two state variables and four adjustable parameters. Log-domain circuit design utilizing MOS transistors in subthreshold results in high energy efficiency, with less than 1pJ of energy consumed per spike. We also discuss the effects of parameter variations on the dynamics of the equations, and present simulation results that replicate several types of neural dynamics. The low power operation and compact analog VLSI realization make the architecture suitable for human-machine interface applications in neural prostheses and implantable bioelectronics, as well as large-scale neural emulation tools for computational neuroscience.
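For reference, the two state variables and four adjustable parameters mentioned above correspond to the published Izhikevich model; the equations below restate that model and say nothing about the circuit implementation itself:

\dot{v} = 0.04v^{2} + 5v + 140 - u + I, \qquad \dot{u} = a(bv - u),
\quad \text{with reset: if } v \ge 30\,\mathrm{mV}, \text{ then } v \leftarrow c,\; u \leftarrow u + d.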
Intelligent man/machine interfaces on the space station
NASA Technical Reports Server (NTRS)
Daughtrey, Rodney S.
1987-01-01
Some important topics in the development of good, intelligent, usable man/machine interfaces for the Space Station are discussed. These computer interfaces should adhere strictly to three concepts or doctrines: generality, simplicity, and elegance. The motivation for natural language interfaces and their use and value on the Space Station, both now and in the future, are discussed.
Improving air traffic control: Proving new tools or approving the joint human-machine system?
NASA Technical Reports Server (NTRS)
Gaillard, Irene; Leroux, Marcel
1994-01-01
From the description of a field problem (i.e., designing decision aids for air traffic controllers), this paper points out how a cognitive engineering approach provides the milestones for the evaluation of future joint human-machine systems.
Some Ideas on the Microcomputer and the Information/Knowledge Workstation.
ERIC Educational Resources Information Center
Boon, J. A.; Pienaar, H.
1989-01-01
Identifies the optimal goal of knowledge workstations as the harmony of technology and human decision-making behaviors. Two types of decision-making processes are described and the application of each type to experimental and/or operational situations is discussed. Suggestions for technical solutions to machine-user interfaces are then offered.…
An implantable integrated low-power amplifier-microelectrode array for Brain-Machine Interfaces.
Patrick, Erin; Sankar, Viswanath; Rowe, William; Sanchez, Justin C; Nishida, Toshikazu
2010-01-01
One of the important challenges in designing Brain-Machine Interfaces (BMI) is to build implantable systems that have the ability to reliably process the activity of large ensembles of cortical neurons. In this paper, we report the design, fabrication, and testing of a polyimide-based microelectrode array integrated with a low-power amplifier as part of the Florida Wireless Integrated Recording Electrode (FWIRE) project at the University of Florida developing a fully implantable neural recording system for BMI applications. The electrode array was fabricated using planar micromachining MEMS processes and hybrid packaged with the amplifier die using a flip-chip bonding technique. The system was tested both on the bench and in vivo. Acute and chronic neural recordings were obtained from a rodent for a period of 42 days. The electrode-amplifier performance was analyzed over the chronic recording period with the observation of a noise floor of 4.5 µVrms, and an average signal-to-noise ratio of 3.8.
1988-09-01
Group/Subgroup: Command and control; computational linguistics; expert systems; voice recognition; man-machine interface; U.S. Government. Abstract: ...simulates the characteristics of FRESH on a smaller scale. This study assisted NOSC in developing a voice-recognition man-machine interface that could be used with TONE and upgraded at a later date.
Manual discrimination of force
NASA Technical Reports Server (NTRS)
Pang, Xiao-Dong; Tan, Hong-Z.; Durlach, Nathaniel I.
1991-01-01
Optimal design of human-machine interfaces for teleoperators and virtual-environment systems which involve the tactual and kinesthetic modalities requires knowledge of the human's resolving power in these modalities. The resolution of the interface should be appropriately matched to that of the human operator. We report some preliminary results on the ability of the human hand to distinguish small differences in force under a variety of conditions. Experiments were conducted on force discrimination with the thumb pushing an interface that exerts a constant force over the pushing distance and the index finger pressing against a fixed support. The dependence of the sensitivity index d' on force increment can be fit by a straight line through the origin and the just-noticeable difference (JND) in force can thus be described by the inverse of the slope of this line. The receiver operating characteristic (ROC) was measured by varying the a priori probabilities of the two alternatives, reference force and reference force plus an increment, in one-interval, two-alternative, forced-choice experiments. When plotted on normal deviate coordinates, the ROC's were roughly straight lines of unit slope, thus supporting the assumption of equal-variance normal distributions and the use of the conventional d' measure. The JND was roughly 6-8 percent for reference force ranging from 2.5 to 10 newtons, pushing distance from 5 to 30 mm, and initial finger-span from 45 to 125 mm. Also, the JND remained the same when the subjects were instructed to change the average speed of pushing from 23 to 153 mm/sec. The pushing was terminated by reaching either a wall or a well, and the JND's were essentially the same in both cases.
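In equation form, the fit described above can be written as follows, taking the conventional d' = 1 point as the JND (the 6-8 percent figure is the Weber fraction reported in the abstract):

d'(\Delta F) = s\,\Delta F, \qquad \Delta F_{\mathrm{JND}} = \frac{1}{s}, \qquad \frac{\Delta F_{\mathrm{JND}}}{F_{\mathrm{ref}}} \approx 6\text{--}8\%.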
The Formal Specification of a Visual display Device: Design and Implementation.
1985-06-01
The use of these data structures, with their defined operations, gives the programmer a very powerful instruction set. Like the DPU code generator in...which any AM-hosted machine could faithfully display. In general, most applications have no need to create images from a data structure representing...formation of standard functional interfaces to these resources. Operating systems generally do not provide a functional interface to either the processor or the display.
NASA Technical Reports Server (NTRS)
Banda, Carolyn; Bushnell, David; Chen, Scott; Chiu, Alex; Constantine, Betsy; Murray, Jerry; Neukom, Christian; Prevost, Michael; Shankar, Renuka; Staveland, Lowell
1991-01-01
The Man-Machine Integration Design and Analysis System (MIDAS) is an integrated suite of software components that constitutes a prototype workstation to aid designers in applying human factors principles to the design of complex human-machine systems. MIDAS is intended to be used at the very early stages of conceptual design to provide an environment wherein designers can use computational representations of the crew station and operator, instead of hardware simulators and man-in-the-loop studies, to discover problems and ask 'what if' questions regarding the projected mission, equipment, and environment. This document is the Software Product Specification for MIDAS. Introductory descriptions of the processing requirements, hardware/software environment, structure, I/O, and control are given in the main body of the document for the overall MIDAS system, with detailed discussion of the individual modules included in Annexes A-J.
Enhancing the Human Factors Engineering Role in an Austere Fiscal Environment
NASA Technical Reports Server (NTRS)
Stokes, Jack W.
2003-01-01
An austere fiscal environment in the aerospace community creates pressures to reduce program costs, often minimizing or sometimes even deleting the human interface requirements from the design process. With the assumption that the flight crew can recover in real time from a poorly human-factored space vehicle design, the classical crew interface requirements have been either not included in the design or not properly funded, though carried as requirements. Cost cuts have also affected the quality of retained human factors engineering personnel. In response to this concern, planning is ongoing to correct these issues. Herein are techniques for ensuring that human interface requirements are integrated into a flight design, from proposal through verification and launch activation. This includes human factors requirements refinement and consolidation across flight programs; keyword phrases in the proposals; closer ties with systems engineering and other classical disciplines; early planning for crew-interface verification; and an Agency integrated human factors verification program, under the One NASA theme. Importance is given to communication within the aerospace human factors discipline, and utilizing the strengths of all government, industry, and academic human factors organizations in a unified research and engineering approach. A list of recommendations and concerns is provided in closing.
Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.
2010-01-01
From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.
NASA Astrophysics Data System (ADS)
Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.
2016-12-01
Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data has been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g. R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED)[1] which provides an interface to databases connected through a common metadata core. In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data to meet funding mandates and academic requirements as well as fostering increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.
Humans, Intelligent Technology, and Their Interface: A Study of Brown’s Point
2017-12-01
known about the role of drivers. When combining humans and intelligent technology (machines), such as self-driving vehicles, how people think about...disrupt the entire transportation industry and potentially change how society moves people and goods. The findings of the investigation are likely...The power of suggestion is very important to understand and consider when framing and bringing meaning to new technology, which points to looking at
Design and fabrication of a freeform phase plate for high-order ocular aberration correction
NASA Astrophysics Data System (ADS)
Yi, Allen Y.; Raasch, Thomas W.
2005-11-01
In recent years it has become possible to measure and in some instances to correct the high-order aberrations of human eyes. We have investigated the correction of wavefront error of human eyes by using phase plates designed to compensate for that error. The wavefront aberrations of the four eyes of two subjects were experimentally determined, and compensating phase plates were machined with an ultraprecision diamond-turning machine equipped with four independent axes. A slow-tool servo freeform trajectory was developed for the machine tool path. The machined phase-correction plates were measured and compared with the original design values to validate the process. The position of the phase-plate relative to the pupil is discussed. The practical utility of this mode of aberration correction was investigated with visual acuity testing. The results are consistent with the potential benefit of aberration correction but also underscore the critical positioning requirements of this mode of aberration correction. This process is described in detail from optical measurements, through machining process design and development, to final results.
The ergonomics approach for thin film transistor-liquid crystal display manufacturing process.
Lu, Chih-Wei; Yao, Chia-Chun; Kuo, Chein-Wen
2012-01-01
The thin film transistor-liquid crystal display (TFT-LCD) is used all over the world. Although the TFT-LCD manufacturing process is highly automated, employees are hired to do manual work in the module assembly process. These operators may be at high risk of musculoskeletal disorders because of the long work hours and the repetitive activities performed at poorly fitted workstations. The tools of this study were a questionnaire and a checklist used to evaluate the workplace design. The results show that the participants reported high rates of musculoskeletal disorder symptoms in the shoulder (59.8%), neck (49.5%), wrist (39.5%), and upper back (30.6%). To reduce the ergonomic risk factors, it was recommended to revise the height of the work benches and chairs and to redesign the truck to decrease the chance of unsuitable postures, as well as to reduce other ergonomic hazards, establish a good human-machine interface, and ensure appropriate job design.
Series Elastic Actuators for legged robots
NASA Astrophysics Data System (ADS)
Pratt, Jerry E.; Krupp, Benjamin T.
2004-09-01
Series Elastic Actuators provide many benefits in force control of robots in unconstrained environments. These benefits include high force fidelity, extremely low impedance, low friction, and good force control bandwidth. Series Elastic Actuators employ a novel mechanical design architecture which goes against the common machine design principle of "stiffer is better." A compliant element is placed between the gear train and driven load to intentionally reduce the stiffness of the actuator. A position sensor measures the deflection, and the force output is accurately calculated using Hooke's law (F = Kx). A control loop then servos the actuator to the desired output force. The resulting actuator has inherent shock tolerance, high force fidelity and extremely low impedance. These characteristics are desirable in many applications including legged robots, exoskeletons for human performance amplification, robotic arms, haptic interfaces, and adaptive suspensions. We describe several variations of Series Elastic Actuators that have been developed using both electric and hydraulic components.
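A minimal sketch of the force-control idea described above follows: spring deflection gives the output force via Hooke's law, and a simple PI loop servos the actuator toward the desired force. The gains, units, and motor interface are illustrative assumptions, not details from the paper.

# Minimal sketch of series-elastic force control; gains and units are assumptions.
class SeriesElasticForceController:
    def __init__(self, spring_stiffness, kp=0.002, ki=0.0005):
        self.k = spring_stiffness   # spring constant K in N/m
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, desired_force, deflection, dt):
        # Measured force from Hooke's law (F = K * x), then a PI velocity command.
        measured_force = self.k * deflection
        error = desired_force - measured_force
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral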
Space Station data management system architecture
NASA Technical Reports Server (NTRS)
Mallary, William E.; Whitelaw, Virginia A.
1987-01-01
Within the Space Station program, the Data Management System (DMS) functions in a dual role. First, it provides the hardware resources and software services which support the data processing, data communications, and data storage functions of the onboard subsystems and payloads. Second, it functions as an integrating entity which provides a common operating environment and human-machine interface for the operation and control of the orbiting Space Station systems and payloads by both the crew and the ground operators. This paper discusses the evolution and derivation of the requirements and issues which have had significant effect on the design of the Space Station DMS, describes the DMS components and services which support system and payload operations, and presents the current architectural view of the system as it exists in October 1986; one-and-a-half years into the Space Station Phase B Definition and Preliminary Design Study.
Designing a SCADA system simulator for fast breeder reactor
NASA Astrophysics Data System (ADS)
Nugraha, E.; Abdullah, A. G.; Hakim, D. L.
2016-04-01
A SCADA (Supervisory Control and Data Acquisition) system simulator is human-machine-interface-based software that can visualize the processes of a plant. This study describes the design of a SCADA system simulator that aims to help the operator monitor and control the plant, handle alarms, and access historical data and historical trends in a fast breeder reactor (FBR) type nuclear power plant (NPP). The research modeled the FBR-type NPP at Kalpakkam, India. The simulator was developed using Wonderware InTouch 10 software and is equipped with a main menu, plant overview, area graphics, control displays, set-point displays, an alarm system, real-time trending, historical trending, and a security system. The simulator properly reproduces the principle of energy flow and the energy conversion process in an FBR-type NPP, and it can be used as a training medium for prospective FBR-type NPP operators.
Brain-machine interfaces for controlling lower-limb powered robotic systems.
He, Yongtian; Eguren, David; Azorín, José M; Grossman, Robert G; Luu, Trieu Phat; Contreras-Vidal, Jose L
2018-04-01
Lower-limb, powered robotics systems such as exoskeletons and orthoses have emerged as novel robotic interventions to assist or rehabilitate people with walking disabilities. These devices are generally controlled by certain physical maneuvers, for example pressing buttons or shifting body weight. Although effective, these control schemes are not what humans naturally use. The usability and clinical relevance of these robotics systems could be further enhanced by brain-machine interfaces (BMIs). A number of preliminary studies have been published on this topic, but a systematic understanding of the experimental design, tasks, and performance of BMI-exoskeleton systems for restoration of gait is lacking. To address this gap, we applied standard systematic review methodology for a literature search in PubMed and EMBASE databases and identified 11 studies involving BMI-robotics systems. The devices, user population, input and output of the BMIs and robot systems respectively, neural features, decoders, denoising techniques, and system performance were reviewed and compared. Results showed BMIs classifying walk versus stand tasks are the most common. The results also indicate that electroencephalography (EEG) is the only recording method for humans. Performance was not clearly presented in most of the studies. Several challenges were summarized, including EEG denoising, safety, responsiveness and others. We conclude that lower-body powered exoskeletons with automated gait intention detection based on BMIs open new possibilities in the assistance and rehabilitation fields, although the current performance, clinical benefits and several key challenging issues indicate that additional research and development is required to deploy these systems in the clinic and at home. Moreover, rigorous EEG denoising techniques, suitable performance metrics, consistent trial reporting, and more clinical trials are needed to advance the field.
Brain-machine interfaces for controlling lower-limb powered robotic systems
NASA Astrophysics Data System (ADS)
He, Yongtian; Eguren, David; Azorín, José M.; Grossman, Robert G.; Phat Luu, Trieu; Contreras-Vidal, Jose L.
2018-04-01
Objective. Lower-limb, powered robotics systems such as exoskeletons and orthoses have emerged as novel robotic interventions to assist or rehabilitate people with walking disabilities. These devices are generally controlled by certain physical maneuvers, for example pressing buttons or shifting body weight. Although effective, these control schemes are not what humans naturally use. The usability and clinical relevance of these robotics systems could be further enhanced by brain-machine interfaces (BMIs). A number of preliminary studies have been published on this topic, but a systematic understanding of the experimental design, tasks, and performance of BMI-exoskeleton systems for restoration of gait is lacking. Approach. To address this gap, we applied standard systematic review methodology for a literature search in PubMed and EMBASE databases and identified 11 studies involving BMI-robotics systems. The devices, user population, input and output of the BMIs and robot systems respectively, neural features, decoders, denoising techniques, and system performance were reviewed and compared. Main results. Results showed BMIs classifying walk versus stand tasks are the most common. The results also indicate that electroencephalography (EEG) is the only recording method for humans. Performance was not clearly presented in most of the studies. Several challenges were summarized, including EEG denoising, safety, responsiveness and others. Significance. We conclude that lower-body powered exoskeletons with automated gait intention detection based on BMIs open new possibilities in the assistance and rehabilitation fields, although the current performance, clinical benefits and several key challenging issues indicate that additional research and development is required to deploy these systems in the clinic and at home. Moreover, rigorous EEG denoising techniques, suitable performance metrics, consistent trial reporting, and more clinical trials are needed to advance the field.
Hybrid soft computing systems for electromyographic signals analysis: a review.
Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates
2014-02-03
Electromyography (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. Hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural network, support vector machine, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.
Hybrid soft computing systems for electromyographic signals analysis: a review
2014-01-01
Electromyography (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. Hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural network, support vector machine, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis. PMID:24490979
Being human in a global age of technology.
Whelton, Beverly J B
2016-01-01
This philosophical enquiry considers the impact of a global world view and technology on the meaning of being human. The global vision increases our awareness of the common bond between all humans, while technology tends to separate us from an understanding of ourselves as human persons. We review some advances in connecting as a community within our world, and many examples of technological changes. This review is not exhaustive. The focus is to understand enough changes to think through the possibility of healthcare professionals becoming cyborgs, human-machine units that are subsequently neither human nor machine. It is seen that human-technology interfaces are a different way of interacting but do not change what it is to be human in our rational capacities for meaningful speech and freely chosen actions. In the highly technical environment of the ICU, expert nurses work in harmony with both the technical equipment and the patient. We used Heidegger to consider the nature of equipment, and Descartes to explore unique human capacities. Aristotle, Wallace, Sokolowski, and Clarke provide a summary of humanity as substantial and relational. © 2015 John Wiley & Sons Ltd.
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
NASA Technical Reports Server (NTRS)
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. © 2012 Diabetes Technology Society.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Introduction Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. Method The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. Results The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. PMID:22401330
Nanoscale wear and machining behavior of nanolayer interfaces.
Nie, Xueyuan; Zhang, Peng; Weiner, Anita M; Cheng, Yang-Tse
2005-10-01
An atomic force microscope was used to incise a nanomultilayer with subnanometer precision, thereby exposing individual nanolayers and interfaces on which sliding and scanning nanowear/machining were performed. The letter reports the first observation on the nanoscale that (i) atomic debris forms in a collective manner, most likely by deformation and rupture of atomic bonds, and (ii) the nanolayer interfaces possess a much higher wear resistance (desired for nanomachines) or lower machinability (not desired for nanomachining) than the layers.
The NASA Lewis large wind turbine program
NASA Technical Reports Server (NTRS)
Thomas, R. L.; Baldwin, D. H.
1981-01-01
The program is directed toward development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generation systems. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Advances are made by gaining a better understanding of the system design drivers, improvements in the analytical design tools, verification of design methods with operating field data, and the incorporation of new technology and innovative designs. An overview of the program activities is presented and includes results from the first and second generation field machines (Mod-OA, -1, and -2), the design phase of the third generation wind turbine (Mod-5) and the advanced technology projects. Also included is the status of the Department of Interior WTS-4 machine.
Conformal Predictions in Multimedia Pattern Recognition
ERIC Educational Resources Information Center
Nallure Balasubramanian, Vineeth
2010-01-01
The fields of pattern recognition and machine learning are on a fundamental quest to design systems that can learn the way humans do. One important aspect of human intelligence that has so far not been given sufficient attention is the capability of humans to express when they are certain about a decision, or when they are not. Machine learning…
SpaceBuoy: A University Nanosat Space Weather Mission
2012-03-26
for all four-side panels. One design and one machine set-up allows a CNC mill to build them almost automatically. Lessons learned from components...in a dual probe configuration, for in situ plasma density) and interfacing with the spacecraft has been completed. Engineering development is
NASA Astrophysics Data System (ADS)
Pekedis, Mahmut; Mascerañas, David; Turan, Gursoy; Ercan, Emre; Farrar, Charles R.; Yildiz, Hasan
2015-08-01
For the last two decades, developments in damage detection algorithms have greatly increased the potential for autonomous decisions about structural health. However, we are still struggling to build autonomous tools that can match the ability of a human to detect, localize, and quantify damage in structures. Therefore, there is a growing interest in merging computational and cognitive concepts to improve structural health monitoring (SHM) solutions. The main objective of this research is to apply a human-machine cooperative approach to damage detection in a tower structure. The cooperative approach includes haptic tools to create an appropriate collaboration between SHM sensor networks, statistical compression techniques and humans. Damage simulation in the structure is conducted by releasing some of the bolt loads. Accelerometers are bonded to various locations of the tower members to acquire the dynamic response of the structure. The obtained accelerometer results are encoded in three different ways to represent them as a haptic stimulus for the human subjects. Then, the participants are subjected to each of these stimuli to detect the bolt-loosening damage in the tower. Results obtained from the human-machine cooperation demonstrate that the human subjects were able to recognize the damage with an accuracy of 88 ± 20.21% and a response time of 5.87 ± 2.33 s. As a result, it is concluded that the human-machine cooperative SHM approach developed here may provide a useful framework to interact with abstract entities such as data from a sensor network.
Physiological properties of brain-machine interface input signals.
Slutzky, Marc W; Flint, Robert D
2017-08-01
Brain-machine interfaces (BMIs), also called brain-computer interfaces (BCIs), decode neural signals and use them to control some type of external device. Despite many experimental successes and terrific demonstrations in animals and humans, a high-performance, clinically viable device has not yet been developed for widespread usage. There are many factors that impact clinical viability and BMI performance. Arguably, the first of these is the selection of brain signals used to control BMIs. In this review, we summarize the physiological characteristics and performance-including movement-related information, longevity, and stability-of multiple types of input signals that have been used in invasive BMIs to date. These include intracortical spikes as well as field potentials obtained inside the cortex, at the surface of the cortex (electrocorticography), and at the surface of the dura mater (epidural signals). We also discuss the potential for future enhancements in input signal performance, both by improving hardware and by leveraging the knowledge of the physiological characteristics of these signals to improve decoding and stability. Copyright © 2017 the American Physiological Society.
Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamagnini, Paolo; Krause, Josua W.; Dasgupta, Aritra
2017-05-14
To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytic interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.
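The abstract does not spell out Rivelo's explanation algorithm, so the sketch below substitutes a simple model-agnostic, instance-level relevance measure for binary features: toggle each feature off and record the change in the predicted probability of any black-box classifier. The random forest and toy data are placeholders for the Yelp-review case study.

# Simple per-instance, model-agnostic relevance scores for binary features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def instance_explanation(model, x):
    # Per-feature relevance for one binary instance x (1-D array of 0/1 values).
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = np.zeros(len(x))
    for i in np.nonzero(x)[0]:           # only features present in this instance
        x_off = x.copy()
        x_off[i] = 0
        scores[i] = base - model.predict_proba(x_off.reshape(1, -1))[0, 1]
    return scores                        # large positive = feature pushed the prediction up

# Toy usage: random binary data standing in for bag-of-words review features.
rng = np.random.default_rng(0)
X, y = rng.integers(0, 2, (200, 20)), rng.integers(0, 2, 200)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(instance_explanation(forest, X[0]).round(3))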
Self-assembling fluidic machines
NASA Astrophysics Data System (ADS)
Grzybowski, Bartosz A.; Radkowski, Michal; Campbell, Christopher J.; Lee, Jessamine Ng; Whitesides, George M.
2004-03-01
This letter describes dynamic self-assembly of two-component rotors floating at the interface between liquid and air into simple, reconfigurable mechanical systems ("machines"). The rotors are powered by an external, rotating magnetic field, and their positions within the interface are controlled by: (i) repulsive hydrodynamic interactions between them and (ii) by localized magnetic fields produced by an array of small electromagnets located below the plane of the interface. The mechanical functions of the machines depend on the spatiotemporal sequence of activation of the electromagnets.
Designing effective human-automation-plant interfaces: a control-theoretic perspective.
Jamieson, Greg A; Vicente, Kim J
2005-01-01
In this article, we propose the application of a control-theoretic framework to human-automation interaction. The framework consists of a set of conceptual distinctions that should be respected in automation research and design. We demonstrate how existing automation interface designs in some nuclear plants fail to recognize these distinctions. We further show the value of the approach by applying it to modes of automation. The design guidelines that have been proposed in the automation literature are evaluated from the perspective of the framework. This comparison shows that the framework reveals insights that are frequently overlooked in this literature. A new set of design guidelines is introduced that builds upon the contributions of previous research and draws complementary insights from the control-theoretic framework. The result is a coherent and systematic approach to the design of human-automation-plant interfaces that will yield more concrete design criteria and a broader set of design tools. Applications of this research include improving the effectiveness of human-automation interaction design and the relevance of human-automation interaction research.
On Abstractions and Simplifications in the Design of Human-Automation Interfaces
NASA Technical Reports Server (NTRS)
Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)
2001-01-01
This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.
On Abstractions and Simplifications in the Design of Human-Automation Interfaces
NASA Technical Reports Server (NTRS)
Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)
2002-01-01
This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.
The use of graphics in the design of the human-telerobot interface
NASA Technical Reports Server (NTRS)
Stuart, Mark A.; Smith, Randy L.
1989-01-01
The Man-Systems Telerobotics Laboratory (MSTL) of NASA's Johnson Space Center employs computer graphics tools in their design and evaluation of the Flight Telerobotic Servicer (FTS) human/telerobot interface on the Shuttle and on the Space Station. It has been determined by the MSTL that the use of computer graphics can promote more expedient and less costly design endeavors. Several specific examples of computer graphics applied to the FTS user interface by the MSTL are described.
Final Report of Work Done on Contract NONR-4010(03).
ERIC Educational Resources Information Center
Chapanis, Alphonse
The 24 papers listed report the findings of a study funded by the Office of Naval Research. The study concentrated on the sensory and cognitive factors in man-machine interfaces. The papers are categorized into three groups: perception studies, human engineering studies, and methodological papers. A brief summary of the most noteworthy findings in…
Reducing lumber thickness variation using real-time statistical process control
Thomas M. Young; Brian H. Bond; Jan Wiedenbeck
2002-01-01
A technology feasibility study for reducing lumber thickness variation was conducted from April 2001 until March 2002 at two sawmills located in the southern U.S. A real-time statistical process control (SPC) system was developed that featured Wonderware human machine interface technology (HMI) with distributed real-time control charts for all sawing centers and...
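A minimal sketch of the control-chart arithmetic behind such a real-time SPC system is given below: X-bar and R chart limits computed from subgroup means and ranges. The subgroup size of 5 and the Shewhart constants (A2 = 0.577, D3 = 0, D4 = 2.114) are standard textbook values, not parameters reported in the study.

# X-bar / R control-chart limits; constants assume subgroups of 5 measurements.
import numpy as np

def xbar_r_limits(subgroups, a2=0.577, d3=0.0, d4=2.114):
    # subgroups: 2-D array, one row per subgroup of lumber-thickness measurements.
    xbar = subgroups.mean(axis=1)
    rng = subgroups.max(axis=1) - subgroups.min(axis=1)
    xbar_bar, r_bar = xbar.mean(), rng.mean()
    return {
        "xbar": (xbar_bar - a2 * r_bar, xbar_bar + a2 * r_bar),  # (LCL, UCL)
        "range": (d3 * r_bar, d4 * r_bar),
    }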
ERIC Educational Resources Information Center
Hert, Carol A.; Nilan, Michael S.
1991-01-01
Presents preliminary data that characterizes the relationship between what users say they are trying to accomplish when using an online public access catalog (OPAC) and their perceptions of what input to give the system. Human-machine interaction is discussed, and appropriate methods for evaluating information retrieval systems are considered. (18…
The Technology Review 10: Emerging Technologies that Will Change the World.
ERIC Educational Resources Information Center
Technology Review, 2001
2001-01-01
Identifies 10 emerging areas of technology that will soon have a profound impact on the economy and on how people live and work: brain-machine interfaces; flexible transistors; data mining; digital rights management; biometrics; natural language processing; microphotonics; untangling code; robot design; and microfluidics. In each area, one…
1990-10-01
to economic, technological, spatial or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems. Once the...probabilistic reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority...scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base. The selection of both
Massachusetts Institute of Technology Consortium Agreement
1999-03-01
In this, our second progress report of the Phase Two Home Automation and Healthcare Consortium at the Brit and Alex d'Arbeloff Laboratory for...Covered here are the diverse fields of home automation and healthcare research, ranging from human modeling, patient monitoring, and diagnosis to new...sensors and actuators, physical aids, human-machine interface and home automation infrastructure. These results will be presented at the upcoming General Assembly of the Consortium held on October 27-October 30, 1998 at MIT.
Flexible and stretchable electronics for biointegrated devices.
Kim, Dae-Hyeong; Ghaffari, Roozbeh; Lu, Nanshu; Rogers, John A
2012-01-01
Advances in materials, mechanics, and manufacturing now allow construction of high-quality electronics and optoelectronics in forms that can readily integrate with the soft, curvilinear, and time-dynamic surfaces of the human body. The resulting capabilities create new opportunities for studying disease states, improving surgical procedures, monitoring health/wellness, establishing human-machine interfaces, and performing other functions. This review summarizes these technologies and illustrates their use in forms integrated with the brain, the heart, and the skin.
Military Medical Decision Support for Homeland Defense During Emergency
2004-12-01
abstraction hierarchy, three levels of information requirement for designing an emergency training interface are recognized. These are epistemological ...support for the human decision-making process is considered to be decision-centric. A typical decision-centric interface is supported by at least four design ...
An extremely lightweight fingernail worn prosthetic interface device
NASA Astrophysics Data System (ADS)
Yetkin, Oguz; Ahluwalia, Simranjit; Silva, Dinithi; Kasi-Okonye, Isioma; Volker, Rachael; Baptist, Joshua R.; Popa, Dan O.
2016-05-01
Upper limb prosthetics are currently operated using several electromyography sensors mounted on an amputee's residual limb. In order for any prosthetic driving interface to be widely adopted, it needs to be responsive, lightweight, and out of the way when not being used. In this paper we discuss the possibility of replacing such electrodes with fingernail optical sensor systems mounted on the sound limb. We present a prototype device that can detect pinch gestures and communicate with the prosthetic system. The device detects the relative position of fingers to each other by measuring light transmitted via tissue. Applications are not limited to prosthetic control, but can be extended to other human-machine interfaces.
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. Sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system is evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair.
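For two unbiased, independent orientation estimates (inertial and vision-based), the minimum-variance fusion mentioned above is the standard inverse-variance weighting; the exact fusion algorithm used in the paper may differ, so the expression below is only the textbook form:

\hat{\theta} = \frac{\theta_{\mathrm{imu}}/\sigma_{\mathrm{imu}}^{2} + \theta_{\mathrm{vis}}/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{imu}}^{2} + 1/\sigma_{\mathrm{vis}}^{2}},
\qquad
\sigma_{\hat{\theta}}^{2} = \left(\frac{1}{\sigma_{\mathrm{imu}}^{2}} + \frac{1}{\sigma_{\mathrm{vis}}^{2}}\right)^{-1}.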
Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. Sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system is evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877
DOE Office of Scientific and Technical Information (OSTI.GOV)
Truitt, R.W.
1997-10-22
This report provides an independent evaluation of information for a Windows based Human Machine Interface (HMI) to replace the existing DOS based Iconics HMI currently used in the Data Acquisition and Control System (DACS) used at Tank 241-SY-101. A fundamental reason for this evaluation is the difficulty of maintaining the system with obsolete, unsupported software. The DACS uses a software operator interface (Genesis for DOS HMI) that is no longer supported by its manufacturer, Iconics. In addition to its obsolescence, it is complex and difficult to train additional personnel on. The FY 1997 budget allocated $40K for phase 1 of a software/hardware upgrade that would have allowed the old DOS based system to be replaced by a current Windows based system. Unfortunately, budget constraints during FY 1997 have prompted deferral of the upgrade. The upgrade needs to be performed at the earliest possible time, before other failures render the system useless. Once completed, the upgrade could alleviate other concerns: spare pump software may be able to be incorporated into the same software as the existing pump, thereby eliminating the parallel path dilemma; and the newer, less complex software should expedite training of future personnel, and in the process, require that less technical time be required to maintain the system.
Highly Stretchable Core-Sheath Fibers via Wet-Spinning for Wearable Strain Sensors.
Tang, Zhenhua; Jia, Shuhai; Wang, Fei; Bian, Changsheng; Chen, Yuyu; Wang, Yonglin; Li, Bo
2018-02-21
Lightweight, stretchable, and wearable strain sensors have recently been widely studied for the development of health monitoring systems, human-machine interfaces, and wearable devices. Herein, highly stretchable polymer elastomer-wrapped carbon nanocomposite piezoresistive core-sheath fibers are successfully prepared using a facile and scalable one-step coaxial wet-spinning assembly approach. The carbon nanotube-polymeric composite core of the stretchable fiber is surrounded by an insulating sheath, similar to conventional cables, and shows excellent electrical conductivity with a low percolation threshold (0.74 vol %). The core-sheath elastic fibers are used as wearable strain sensors, exhibiting ultra-high stretchability (above 300%), excellent stability (>10 000 cycles), fast response, low hysteresis, and good washability. Furthermore, the piezoresistive core-sheath fiber possesses bending insensitivity and negligible torsion sensitivity, and its strain sensing performance maintains a high degree of stability under harsh conditions. On the basis of this high level of performance, the fiber-shaped strain sensor can accurately detect both subtle and large-scale human movements by embedding it in gloves and garments or by directly attaching it to the skin. The current results indicate that the proposed stretchable strain sensor has many potential applications in health monitoring, human-machine interfaces, soft robotics, and wearable electronics.
A Discussion of Possibility of Reinforcement Learning Using Event-Related Potential in BCI
NASA Astrophysics Data System (ADS)
Yamagishi, Yuya; Tsubone, Tadashi; Wada, Yasuhiro
Recently, the brain-computer interface (BCI), a direct communication pathway between an external device such as a computer or robot and the human brain, has received a lot of attention. Since a BCI can control machines such as robots using brain activity alone, without voluntary muscle movement, it may become a useful communication tool for handicapped persons, for instance amyotrophic lateral sclerosis patients. However, in order to realize a BCI system that can perform precise tasks in various environments, it is necessary to design control rules that adapt to dynamic environments. Reinforcement learning is one approach to the design of such control rules. If this reinforcement learning can be driven by brain activity, it would lead to a BCI with general versatility. In this research, we focused on the P300 event-related potential as an alternative signal for the reward of reinforcement learning. We discriminated between success and failure trials from the P300 of single-trial EEG using the proposed discrimination algorithm based on a support vector machine. The possibility of reinforcement learning was examined from the viewpoint of the number of correctly discriminated trials. It was shown that there is a possibility of learning in most subjects.
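The abstract does not detail the classifier implementation; the sketch below only illustrates the general pattern of single-trial success/failure discrimination with a linear support vector machine over flattened EEG epochs, using scikit-learn and synthetic placeholder data (the epoch dimensions and labels are assumptions, not the authors' setup).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 200 single-trial epochs, 8 channels x 100 samples each,
# labelled 1 (success / P300 present) or 0 (failure / P300 absent).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8 * 100))
y = rng.integers(0, 2, size=200)

# Linear SVM on standardized, flattened epochs; accuracy estimated by 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("single-trial accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

With real EEG, the features would come from the post-stimulus window where the P300 is expected rather than from raw random samples as in this placeholder.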
Ant-Based Cyber Defense (also known as ABCD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glenn Fink, PNNL
2015-09-29
ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.
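The report describes the agent hierarchy only at this high level; the Python classes below are a purely illustrative rendering of the four roles and their reporting chain, not the actual ABCD code.

```python
class Sensor:
    """Mobile agent that swarms through an enclave looking for indicators."""
    def scan(self, host):
        return []  # illustrative: would return suspected malice/misconfiguration indicators

class Sentinel:
    """Stationary agent on each monitored host; collects Sensor reports."""
    def __init__(self, host):
        self.host, self.reports = host, []
    def receive(self, indicators):
        self.reports.extend(indicators)

class Sergeant:
    """Top-level agent controlling one enclave of Sentinels."""
    def __init__(self, sentinels):
        self.sentinels = sentinels
    def summarize(self):
        return {s.host: len(s.reports) for s in self.sentinels}

class Supervisor:
    """Human decision maker at the top of the hierarchy."""
    def review(self, sergeant):
        print(sergeant.summarize())

# Minimal wiring of the four levels (hypothetical host names).
sentinels = [Sentinel(h) for h in ("host-a", "host-b")]
sentinels[0].receive(Sensor().scan("host-a"))
Supervisor().review(Sergeant(sentinels))
```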
Developing and Validating Practical Eye Metrics for the Sense-Assess-Augment Framework
2015-09-29
Sense-Assess-Augment (SAA) Framework. To better close the loop between human and machine teammates, AFRL's Human Performance Wing and Human ... Sense-Assess-Augment (SAA) framework, which is designed to sense a suite of physiological signals from the operator, use these signals to assess the ... to use psychophysiological measures to improve human-machine teamwork (such as Biocybernetics or Augmented Cognition), the AFRL SAA research program ...
What Machines Need to Learn to Support Human Problem-Solving
NASA Technical Reports Server (NTRS)
Vera, Alonso
2017-01-01
In the development of intelligent systems that interact with humans, there is often confusion between how the system functions with respect to the humans it interacts with and how it interfaces with those humans. The former is a much deeper challenge than the latter: it requires a system-level understanding of evolving human roles as well as an understanding of what humans need to know (and when) in order to perform their tasks. This talk will focus on some of the challenges in getting this right, as well as on the type of research and development that results in successful human-autonomy teaming. Brief Bio: Dr. Alonso Vera is Chief of the Human Systems Integration Division at NASA Ames Research Center. His expertise is in human-computer interaction, information systems, artificial intelligence, and computational human performance modeling. He has led the design, development, and deployment of mission software systems across NASA robotic and human space flight missions, including the Mars Exploration Rovers, Phoenix Mars Lander, ISS, Constellation, and Exploration Systems. Dr. Vera received a Bachelor of Science with First Class Honors from McGill University in 1985 and a Ph.D. from Cornell University in 1991. He went on to a Post-Doctoral Fellowship in the School of Computer Science at Carnegie Mellon University from 1990 to 1993.
1987-12-01
Normally, the system is decomposed into manageable parts with accurately defined interfaces. By rigidly controlling this process, aerospace companies have ... Contents include: A Change in System Design Emphasis: From Machine to Man (M. L. Metersky and J. L. Ryder); Session II, Managing the Future System Design Process; Managing Advanced Avionic System Design (P. Simons); Psychosensory Ergonomics of Cockpits: The Value of Intelligent Computer Systems [Ergonomie psychosensorielle des cockpits, intérêt des systèmes informatiques intelligents].
NASA Technical Reports Server (NTRS)
Booher, Cletis R.; Goldsberry, Betty S.
1994-01-01
During the second half of the 1980s, a document was created by the National Aeronautics and Space Administration (NASA) to aid in the application of good human factors engineering and human interface practices to the design and development of hardware and systems for use in all United States manned space flight programs. This comprehensive document, known as NASA-STD-3000, the Man-Systems Integration Standards (MSIS), attempts to address, from a human factors engineering/human interface standpoint, all of the various types of equipment with which manned space flight crew members must deal. Basically, all of the human interface situations addressed in the MSIS are present in terrestrially based systems also. The premise of this paper is that, starting with this already created standard, comprehensive documents addressing human factors engineering and human interface concerns could be developed to aid in the design of almost any type of equipment or system which humans interface with in any terrestrial environment. Utilizing the systems and processes currently in place in the MSIS Development Facility at the Johnson Space Center in Houston, TX, any number of MSIS volumes addressing the human factors / human interface needs of any terrestrially based (or, for that matter, airborne) system could be created.
A Distributed Control System Prototyping Environment to Support Control Room Modernization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lew, Roger Thomas; Boring, Ronald Laurids; Ulrich, Thomas Anthony
Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs; however, the set of tools for developing and designing HMIs is still in its infancy. Here we propose a rapid prototyping approach for integrating proposed HMIs into their native environments before a design is finalized. This approach allows researchers and developers to test design ideas and eliminate design flaws prior to fully developing the new system. We illustrate this approach with four prototype designs developed using Microsoft's Windows Presentation Foundation (WPF). One example is integrated into a microworld environment to test the functionality of the design and identify the optimal level of automation for a new system in a nuclear power plant. The other three examples are integrated into a full-scale, glasstop digital simulator of a nuclear power plant. One example demonstrates the capabilities of next generation control concepts; another aims to expand the current state of the art; lastly, an HMI prototype was developed as a test platform for a new control system currently in development at U.S. nuclear power plants. WPF possesses several characteristics that make it well suited to HMI design. It provides a tremendous amount of flexibility, agility, robustness, and extensibility. Distributed control system (DCS) specific environments tend to focus on the safety and reliability requirements for real-world interfaces and consequently place less emphasis on providing functionality to support novel interaction paradigms. Because of WPF's large user base, Microsoft can provide an extremely mature tool. Within process control applications, WPF is platform independent and can communicate with popular full-scope process control simulator vendor plant models and DCS platforms.
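The abstract does not describe the microworld itself; as a purely hypothetical illustration of how a level-of-automation study might be scripted, the toy simulation below blends a slow "manual" correction with a fast "automatic" one and reports tracking error at several automation levels (all gains and parameters are invented for this sketch).

```python
import random

def run_trial(automation_level, steps=200, setpoint=50.0):
    """Toy microworld: a drifting process value is nudged back toward a
    setpoint. automation_level in [0, 1] blends a slow 'manual' correction
    with a fast 'automatic' one; mean absolute error is the outcome measure."""
    value, error_sum = setpoint, 0.0
    for _ in range(steps):
        value += random.uniform(-1.0, 1.0)  # random process disturbance
        manual_gain, auto_gain = 0.1, 0.6   # invented corrective gains
        gain = (1 - automation_level) * manual_gain + automation_level * auto_gain
        value -= gain * (value - setpoint)  # corrective action this step
        error_sum += abs(value - setpoint)
    return error_sum / steps

for level in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"automation level {level:.2f}: mean |error| = {run_trial(level):.2f}")
```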
Sutherland, Garnette R; Wolfsberger, Stefan; Lama, Sanju; Zarei-nia, Kourosh
2013-01-01
Intraoperative imaging disrupts the rhythm of surgery despite providing an excellent opportunity for surgical monitoring and assessment. To allow surgery within real-time images, neuroArm, a teleoperated surgical robotic system, was conceptualized. The objective was to design and manufacture a magnetic resonance-compatible robot with a human-machine interface that could reproduce some of the sight, sound, and touch of surgery at a remote workstation. University of Calgary researchers worked with MacDonald, Dettwiler and Associates engineers to produce a requirements document, preliminary design review, and critical design review, followed by the manufacture, preclinical testing, and clinical integration of neuroArm. During the preliminary design review, the scope of the neuroArm project changed to performing microsurgery outside the magnet and stereotaxy inside the bore. neuroArm was successfully manufactured and installed in an intraoperative magnetic resonance imaging operating room. neuroArm was clinically integrated into 35 cases in a graded fashion. As a result of this experience, neuroArm II is in development, and advances in technology will allow microsurgery within the bore of the magnet. neuroArm represents a successful interdisciplinary collaboration. It has positive implications for the future of robotic technology in neurosurgery in that the precision and accuracy of robots will continue to augment human capability.
Introduction of knowledge bases in patient's data management system: role of the user interface.
Chambrin, M C; Ravaux, P; Jaborska, A; Beugnet, C; Lestavel, P; Chopin, C; Boniface, M
1995-02-01
As the number of signals and data to be handled in the intensive care unit grows, it is necessary to design more powerful computing systems that integrate and summarize all this information. The manual input of data, such as clinical signs and drug prescriptions, and the synthetic representation of these data require an ever more sophisticated user interface. The introduction of knowledge bases into data management makes it possible to design contextual interfaces. The objective of this paper is to show the importance of the design of the user interface in the daily use of a clinical information system. We then describe a methodology that uses man-machine interaction to capture the clinician's knowledge during clinical practice. The different steps are the audit of the user's actions, the elaboration of statistical models allowing the definition of new knowledge, and a validation performed before complete integration. Part of this knowledge can be used to improve the user interface. Finally, we describe the implementation of these concepts on a UNIX platform using the OSF/MOTIF graphical interface.
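The paper outlines its methodology (audit of user actions, statistical modelling, validation) without implementation detail; the fragment below is a highly simplified, hypothetical sketch of how an audit log of interface actions could be aggregated into frequency statistics that might later drive contextual defaults (all names and contexts are illustrative).

```python
from collections import Counter
from datetime import datetime

audit_log = []  # illustrative in-memory audit trail

def record_action(user, context, action):
    """Audit step: every interface action is logged with its clinical context."""
    audit_log.append({"time": datetime.now(), "user": user,
                      "context": context, "action": action})

def frequent_actions(context, top_n=3):
    """Statistical step: the most frequent actions in a given context could be
    proposed as contextual defaults in the interface (after validation)."""
    counts = Counter(e["action"] for e in audit_log if e["context"] == context)
    return [action for action, _ in counts.most_common(top_n)]

record_action("dr_a", "sepsis", "prescribe_antibiotic_x")
record_action("dr_a", "sepsis", "order_blood_culture")
record_action("dr_b", "sepsis", "prescribe_antibiotic_x")
print(frequent_actions("sepsis"))
```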
A Prototype SSVEP Based Real Time BCI Gaming System
Martišius, Ignas
2016-01-01
Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel. PMID:27051414
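The wave atom transform used in the paper has no standard Python implementation; as a simplified stand-in, the sketch below extracts spectral power at the stimulation frequencies and feeds it to the LDA and SVM classifiers named in the abstract, using synthetic placeholder epochs (the stimulation frequencies, epoch length, and data are illustrative; only the 128 Hz EPOC sampling rate is taken from the hardware).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

fs = 128                       # Emotiv EPOC sampling rate (Hz)
stim_freqs = [6.0, 8.0, 10.0]  # illustrative SSVEP target frequencies

def band_power_features(epoch):
    """Spectral power at each stimulation frequency (simplified stand-in
    for the wave atom transform used in the paper)."""
    spectrum = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1.0 / fs)
    return np.array([spectrum[..., np.argmin(np.abs(freqs - f))].mean()
                     for f in stim_freqs])

# Placeholder data: 90 single-channel 2-second epochs, three target classes.
rng = np.random.default_rng(1)
epochs = rng.normal(size=(90, fs * 2))
labels = rng.integers(0, 3, size=90)
X = np.vstack([band_power_features(e) for e in epochs])

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM (RBF)", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: cross-validated accuracy {acc:.2f}")
```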
Design of cylindrical pipe automatic welding control system based on STM32
NASA Astrophysics Data System (ADS)
Chen, Shuaishuai; Shen, Weicong
2018-04-01
The development of the modern economy has rapidly increased the demand for pipeline construction, and pipeline welding has become an important link in pipeline construction. At present, manual welding methods are still widely used at home and abroad, and field pipe welding in particular lacks miniature, portable automatic welding equipment. An automated welding system consists of a control system, comprising a lower-computer control panel and a host-computer operating interface, together with the automatic welding machine mechanisms and welding power systems coordinated by the control system. In this paper, a new control system for automatic pipe welding, based on the lower-computer control panel and the host-computer interface, is proposed, which has many advantages over the traditional automatic welding machine.
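The paper does not specify how the host-computer operating interface communicates with the lower-computer control panel; as a purely hypothetical sketch, welding parameters could be sent over a serial link with pyserial, as below (the port name, baud rate, and command framing are assumptions, not the authors' protocol).

```python
import serial  # pyserial; the framing below is purely illustrative

def send_weld_parameters(port, current_a, travel_speed_mm_s, start=True):
    """Send a simple ASCII command frame to the lower-computer control panel.
    The frame format 'SET,<current>,<speed>,<start>\n' is an assumption made
    for illustration only."""
    frame = f"SET,{current_a:.1f},{travel_speed_mm_s:.1f},{1 if start else 0}\n"
    with serial.Serial(port, baudrate=115200, timeout=1) as link:
        link.write(frame.encode("ascii"))
        return link.readline().decode("ascii").strip()  # e.g. an ACK string

# Example usage (the port name is an assumption):
# print(send_weld_parameters("/dev/ttyUSB0", current_a=120.0, travel_speed_mm_s=3.5))
```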
A Prototype SSVEP Based Real Time BCI Gaming System.
Martišius, Ignas; Damaševičius, Robertas
2016-01-01
Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel.