Sample records for control user interface

  1. Stand-alone digital data storage control system including user control interface

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D. (Inventor); Gray, David L. (Inventor)

    1994-01-01

    A storage control system includes an apparatus and method for user control of a storage interface to operate a storage medium to store data obtained by a real-time data acquisition system. Digital data received in serial format from the data acquisition system is first converted to a parallel format and then provided to the storage interface. The operation of the storage interface is controlled in accordance with instructions based on user control input from a user. Also, a user status output is displayed in accordance with storage data obtained from the storage interface. By allowing the user to control and monitor the operation of the storage interface, a stand-alone, user-controllable data storage system is provided for storing the digital data obtained by a real-time data acquisition system.

  2. Human-computer interface including haptically controlled interactions

    DOEpatents

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
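
    As a minimal illustration of the force-to-scroll-rate mapping the abstract describes, the Python sketch below turns the force a user applies against a haptic boundary into a scroll rate; the dead band, gain, and saturation values are invented for the example and are not taken from the patent.

      def scroll_rate(applied_force_n, dead_band_n=0.5, gain=120.0, max_rate=1000.0):
          """Map force applied against a haptic boundary (newtons) to a scroll
          rate (pixels/second): no scrolling inside the dead band, then a rate
          that grows linearly with force and saturates at max_rate."""
          excess = abs(applied_force_n) - dead_band_n
          if excess <= 0.0:
              return 0.0
          direction = 1.0 if applied_force_n > 0 else -1.0
          return direction * min(gain * excess, max_rate)

      # Example: 2.5 N pressed against the lower scroll boundary.
      print(scroll_rate(2.5))    # 240.0 px/s
      print(scroll_rate(-0.3))   # 0.0 (inside the dead band)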

  3. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  4. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-01-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  5. Improved usability of a multi-infusion setup using a centralized control interface: A task-based usability test

    PubMed Central

    Cnossen, Fokie; Dieperink, Willem; Bult, Wouter; de Smet, Anne Marie; Touw, Daan J.; Nijsten, Maarten W.

    2017-01-01

    The objective of this study was to assess the usability benefits of adding a bedside central control interface that controls all intravenous (IV) infusion pumps compared to the conventional individual control of multiple infusion pumps. Eighteen dedicated ICU nurses volunteered in a between-subjects task-based usability test. A newly developed central control interface was compared to conventional control of multiple infusion pumps in a simulated ICU setting. Task execution time, clicks, errors and questionnaire responses were evaluated. Overall the central control interface outperformed the conventional control in terms of fewer user actions (40±3 vs. 73±20 clicks, p<0.001) and fewer user errors (1±1 vs. 3±2 errors, p<0.05), with no difference in task execution times (421±108 vs. 406±119 seconds, not significant). Questionnaires indicated a significant preference for the central control interface. Despite being novice users of the central control interface, ICU nurses displayed improved performance with the central control interface compared to the conventional interface they were familiar with. We conclude that the new user interface has an overall better usability than the conventional interface. PMID:28800617

  6. Improved usability of a multi-infusion setup using a centralized control interface: A task-based usability test.

    PubMed

    Doesburg, Frank; Cnossen, Fokie; Dieperink, Willem; Bult, Wouter; de Smet, Anne Marie; Touw, Daan J; Nijsten, Maarten W

    2017-01-01

    The objective of this study was to assess the usability benefits of adding a bedside central control interface that controls all intravenous (IV) infusion pumps compared to the conventional individual control of multiple infusion pumps. Eighteen dedicated ICU nurses volunteered in a between-subjects task-based usability test. A newly developed central control interface was compared to conventional control of multiple infusion pumps in a simulated ICU setting. Task execution time, clicks, errors and questionnaire responses were evaluated. Overall the central control interface outperformed the conventional control in terms of fewer user actions (40±3 vs. 73±20 clicks, p<0.001) and fewer user errors (1±1 vs. 3±2 errors, p<0.05), with no difference in task execution times (421±108 vs. 406±119 seconds, not significant). Questionnaires indicated a significant preference for the central control interface. Despite being novice users of the central control interface, ICU nurses displayed improved performance with the central control interface compared to the conventional interface they were familiar with. We conclude that the new user interface has an overall better usability than the conventional interface.

  7. Command and control interfaces for advanced neuroprosthetic applications.

    PubMed

    Scott, T R; Haugland, M

    2001-10-01

    Command and control interfaces permit the intention and situation of the user to influence the operation of the neural prosthesis. The wishes of the user are communicated via command interfaces to the neural prosthesis and the situation of the user by feedback control interfaces. Both these interfaces have been reviewed separately and are discussed in light of the current state of the art and projections for the future. It is apparent that as system functional complexity increases, the need for simpler command interfaces will increase. Such systems will demand more information to function effectively in order not to unreasonably increase user attention overhead. This will increase the need for bioelectric and biomechanical signals in a comprehensible form via elegant feedback control interfaces. Implementing such systems will also increase the computational demand on such neural prostheses.

  8. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.
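
    The abstract notes that the Object User Interface is configured through a user-modifiable XML control file. The snippet below is only a hypothetical illustration of reading such a file in Python; the element and attribute names are invented for the example and do not reflect the actual OUI schema.

      import xml.etree.ElementTree as ET

      # Hypothetical control file: tag and attribute names are illustrative only.
      CONTROL_FILE = """
      <oui-project name="example-basin">
        <model id="prms" executable="prms.exe"/>
        <data-interface id="climate" type="timeseries" path="data/climate.csv"/>
        <tool id="map-viewer" class="gov.usgs.oui.MapViewer"/>
      </oui-project>
      """

      root = ET.fromstring(CONTROL_FILE)
      print("project:", root.get("name"))
      for child in root:
          # Each child element declares one model, data interface, or GUI tool.
          print(child.tag, dict(child.attrib))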

  9. An Object-Oriented Graphical User Interface for a Reusable Rocket Engine Intelligent Control System

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Musgrave, Jeffrey L.; Guo, Ten-Huei; Paxson, Daniel E.; Wong, Edmond; Saus, Joseph R.; Merrill, Walter C.

    1994-01-01

    An intelligent control system for reusable rocket engines under development at NASA Lewis Research Center requires a graphical user interface to allow observation of the closed-loop system in operation. The simulation testbed consists of a real-time engine simulation computer, a controls computer, and several auxiliary computers for diagnostics and coordination. The system is set up so that the simulation computer could be replaced by the real engine and the change would be transparent to the control system. Because of the hard real-time requirement of the control computer, putting a graphical user interface on it was not an option. Thus, a separate computer used strictly for the graphical user interface was warranted. An object-oriented LISP-based graphical user interface has been developed on a Texas Instruments Explorer 2+ to indicate the condition of the engine to the observer through plots, animation, interactive graphics, and text.

  10. TangibleCubes — Implementation of Tangible User Interfaces through the Usage of Microcontroller and Sensor Technology

    NASA Astrophysics Data System (ADS)

    Setscheny, Stephan

    The interaction between human beings and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and the keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and of sensors for detecting human interaction, developers gain new possibilities for realising innovative interfaces. With this movement, the relevance of computers in the conventional sense, and of graphical user interfaces, is decreasing. The impact of this technical evolution is especially visible in ubiquitous computing and in interaction through tangible user interfaces. Beyond this, tangible, physically experienceable interaction offers users an interactive and intuitive way of controlling technical objects, and the implementation of microcontrollers for control functions, together with sensors, enables the realisation of these interfaces. Besides the theory of tangible user interfaces, the consideration of sensors and of the Arduino platform forms a main aspect of this work.

  11. Flight Telerobotic Servicer prototype simulator

    NASA Astrophysics Data System (ADS)

    Schein, Rob; Krauze, Linda; Hartley, Craig; Dickenson, Alan; Lavecchia, Tom; Working, Bob

    A prototype simulator for the Flight Telerobotic Servicer (FTS) system is described for use in the design development of the FTS, emphasizing the hand controller and user interface. The simulator utilizes a graphics workstation based on rapid prototyping tools for systems analyses of the use of the user interface and the hand controller. Kinematic modeling, manipulator-control algorithms, and communications programs are contained in the software for the simulator. The hardwired FTS panels and operator interface for use on the STS Orbiter are represented graphically, and the simulated controls function as the final FTS system configuration does. The robotic arm moves based on the user hand-controller interface, and the joint angles and other data are given on the prototype of the user interface. This graphics simulation tool provides the means for familiarizing crewmembers with the FTS system operation, displays, and controls.

  12. Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.

    PubMed

    Lopes, Ana C; Nunes, Urbano; Vaz, Luís

    2010-01-01

    This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain-Computer Interfaces (BCI), characterized by providing a small set of commands issued sparsely, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) used to train users' ability to steer a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller combines the information provided by the local motion-planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.
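
    One common way to realize the kind of shared control described above is to blend the sparse user command with the local planner's command, weighted by a user-characterization factor. The Python sketch below is an assumption-laden illustration of that idea, not the authors' published controller; the weighting scheme and command format are invented.

      def shared_control(user_cmd, planner_cmd, user_weight):
          """Blend a sparse user command with the local planner's command.
          user_cmd / planner_cmd: (linear_velocity, angular_velocity) tuples.
          user_weight: 0..1 from the user-characterization step (higher gives
          the user more authority). With no activation this cycle, the planner
          drives alone."""
          if user_cmd is None:
              return planner_cmd
          v = user_weight * user_cmd[0] + (1 - user_weight) * planner_cmd[0]
          w = user_weight * user_cmd[1] + (1 - user_weight) * planner_cmd[1]
          return (v, w)

      # The planner heads straight for the goal; the user nudges left with 60% authority.
      print(shared_control((0.3, 0.5), (0.4, 0.0), user_weight=0.6))   # approximately (0.34, 0.3)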

  13. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. INL Multi-Robot Control Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2005-03-30

    The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot’s condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.

  15. "I'm Keeping Those There, Are You?": The Role of a New User Interface Paradigm--Separate Control of Shared Space (SCOSS)--in the Collaborative Decision-Making Process

    ERIC Educational Resources Information Center

    Kerawalla, Lucinda; Pearce, Darren; Yuill, Nicola; Luckin, Rosemary; Harris, Amanda

    2008-01-01

    We take a socio-cultural approach to comparing how dual control of a new user interface paradigm--Separate Control of Shared Space (SCOSS)--and dual control of a single user interface can work to mediate the collaborative decision-making process between pairs of children carrying out a multiple categorisation word task on a shared computer.…

  16. Human-telerobot interactions - Information, control, and mental models

    NASA Technical Reports Server (NTRS)

    Smith, Randy L.; Gillan, Douglas J.

    1987-01-01

    Part of NASA's Space Station will be a teleoperated robot (telerobot) with arms for grasping and manipulation, feet for holding onto objects, and television cameras for visual feedback. The objective of the work described in this paper is to develop the requirements and specifications for the user-telerobot interface and to determine through research and testing that the interface results in efficient system operation. The focus of the development of the user-telerobot interface is on the information required by the user, the user inputs, and the design of the control workstation. Closely related to both the information required by the user and the user's control of the telerobot is the user's mental model of the relationship between the control inputs and the telerobot's actions.

  17. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.

  18. Experimental setup for evaluating an adaptive user interface for teleoperation control

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interaction with a machine is the control interface, which single-handedly can define user satisfaction and the efficiency of performing a task. This paper elaborates on the implementation of an experimental setup to study an adaptive algorithm that can help the user better teleoperate the robot. The formulation of the adaptive interface and associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and associated results that were used to validate the adaptive interface on a differential-drive robot with two different input devices: a joystick and a Myo gesture-control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
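
    To make the genetic-algorithm idea concrete, the sketch below evolves a small gain matrix that maps two joystick axes to the two wheel speeds of a differential-drive robot. The population size, mutation scheme, and the stand-in fitness function are illustrative assumptions; in the actual setup the fitness would come from how well users perform with a candidate mapping, not from a fixed target.

      import random

      TARGET = [[1.0, -0.5], [1.0, 0.5]]   # stand-in for the "ideal" joystick-to-wheel gains

      def fitness(genome):
          # Negative squared error against the stand-in target; in the real system this
          # score would be derived from user trials with the candidate mapping.
          return -sum((g - t) ** 2
                      for row_g, row_t in zip(genome, TARGET)
                      for g, t in zip(row_g, row_t))

      def mutate(genome, sigma=0.1):
          return [[g + random.gauss(0, sigma) for g in row] for row in genome]

      def evolve(pop_size=30, generations=100):
          pop = [[[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              elite = pop[: pop_size // 4]                 # keep the best quarter
              pop = elite + [mutate(random.choice(elite))
                             for _ in range(pop_size - len(elite))]
          return max(pop, key=fitness)

      print(evolve())   # gains drift toward the TARGET-like mapping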

  19. Designing the user interface: strategies for effective human-computer interaction

    NASA Astrophysics Data System (ADS)

    Shneiderman, B.

    1998-03-01

    In revising this popular book, Ben Shneiderman again provides a complete, current and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years' experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.

  20. WIFIP: a web-based user interface for automated synchrotron beamlines.

    PubMed

    Sallaz-Damaz, Yoann; Ferrer, Jean Luc

    2017-09-01

    The beamline control software, through the associated graphical user interface (GUI), is the user access point to the experiment, interacting with synchrotron beamline components and providing automated routines. FIP, the French beamline for the Investigation of Proteins, is a highly automated macromolecular crystallography (MX) beamline at the European Synchrotron Radiation Facility. On such a beamline, a significant number of users choose to control their experiment remotely. This is often performed with a limited bandwidth and from a large choice of computers and operating systems. Furthermore, this has to be possible in a rapidly evolving experimental environment, where new developments have to be easily integrated. To face these challenges, a light, platform-independent control software and associated GUI are required. Here, WIFIP, a web-based user interface developed at FIP, is described. Beyond being the present FIP control interface, WIFIP is also a proof of concept for future MX control software.

  1. Space Segment (SS) and the Navigation User Segment (US) Interface Control Document (ICD)

    DOT National Transportation Integrated Search

    1993-10-10

    This Interface Control Document (ICD) defines the requirements related to the interface between the Space Segment (SS) of the Global Positioning System (GPS) and the Navigation User Segment of the GPS.

  2. An EMG-based robot control scheme robust to time-varying EMG signal features.

    PubMed

    Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J

    2010-05-01

    Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots in everyday life, especially in providing services to people with special needs (i.e., elderly, people with impairments, or people with disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, making the user's upper limb free of bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control in real time an anthropomorphic robot arm in 3-D space, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes with respect to time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in the 3-D space with variable hand speed profiles.
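
    As a rough, hypothetical illustration of decoding limb motion from surface EMG, the sketch below rectifies and smooths each EMG channel and maps the resulting envelopes to joint velocities through a linear decoder. The smoothing constant and decoder weights are placeholders; the paper's actual model is more elaborate and specifically designed to stay robust to fatigue-related signal changes.

      def emg_envelope(samples, alpha=0.05, state=0.0):
          """Full-wave rectify, then exponentially smooth, one EMG channel."""
          out = []
          for s in samples:
              state = (1 - alpha) * state + alpha * abs(s)
              out.append(state)
          return out

      def decode_joint_velocities(envelopes, weights):
          """Linear decoder: each joint velocity is a weighted sum of channel
          envelopes; the weights would normally be identified from calibration data."""
          return [sum(w * e for w, e in zip(row, envelopes)) for row in weights]

      # Two-channel example (biceps/triceps) driving a single elbow joint.
      biceps = emg_envelope([0.2, -0.8, 0.9, -0.7])[-1]
      triceps = emg_envelope([0.1, -0.1, 0.05, -0.05])[-1]
      print(decode_joint_velocities([biceps, triceps], weights=[[2.0, -2.0]]))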

  3. Adaptive Interfaces

    DTIC Science & Technology

    1990-11-01

    to design and implement an adaptive intelligent interface for a command-and-control-style domain. The primary functionality of the resulting...technical tasks, as follows: 1. Analysis of Current Interface Technologies; 2. Delineation of User Roles; 3. Development of User Models; 4. Design of Interface...Management Association (FEMA). In the initial version of the prototype, two distinct user models were designed. One type of user modeled by the system is

  4. Customizing graphical user interface technology for spacecraft control centers

    NASA Technical Reports Server (NTRS)

    Beach, Edward; Giancola, Peter; Gibson, Steven; Mahmot, Ronald

    1993-01-01

    The Transportable Payload Operations Control Center (TPOCC) project is applying the latest in graphical user interface technology to the spacecraft control center environment. This project of the Mission Operations Division's (MOD) Control Center Systems Branch (CCSB) at NASA Goddard Space Flight Center (GSFC) has developed an architecture for control centers which makes use of a distributed processing approach and the latest in Unix workstation technology. The TPOCC project is committed to following industry standards and using commercial off-the-shelf (COTS) hardware and software components wherever possible to reduce development costs and to improve operational support. TPOCC's most successful use of commercial software products and standards has been in the development of its graphical user interface. This paper describes TPOCC's successful use and customization of four separate layers of commercial software products to create a flexible and powerful user interface that is uniquely suited to spacecraft monitoring and control.

  5. A Prototype Lisp-Based Soft Real-Time Object-Oriented Graphical User Interface for Control System Development

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Wong, Edmond; Simon, Donald L.

    1994-01-01

    A prototype Lisp-based soft real-time object-oriented Graphical User Interface for control system development is presented. The Graphical User Interface executes alongside a test system in laboratory conditions to permit observation of the closed loop operation through animation, graphics, and text. Since it must perform interactive graphics while updating the screen in real time, techniques are discussed which allow quick, efficient data processing and animation. Examples from an implementation are included to demonstrate some typical functionalities which allow the user to follow the control system's operation.

  6. T-LECS: The Control Software System for MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, T.; Omata, K.; Konishi, M.; Ichikawa, T.; Suzuki, R.; Tokoku, C.; Katsuno, Y.; Nishimura, T.

    2006-07-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru Telescope. We present the system design of the control software system for MOIRCS, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS is a PC-Linux based network distributed system. Two PCs equipped with the focal plane array system operate the two HAWAII2 detectors, respectively, and another PC is used for user interfaces and a database server. Moreover, these PCs control various devices for observations, distributed on a TCP/IP network. T-LECS has three interfaces: an interface to the devices and two user interfaces. One user interface connects to the integrated observation control system (Subaru Observation Software System) for observers, and the other provides system developers direct access to the devices of MOIRCS. To help the communication between these interfaces, we employ an SQL database system.

  7. Gestures in an Intelligent User Interface

    NASA Astrophysics Data System (ADS)

    Fikkert, Wim; van der Vet, Paul; Nijholt, Anton

    In this chapter we investigated which hand gestures are intuitive for controlling a large display multimedia interface from a user's perspective. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully control a large display multimedia interface, intuitively. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which users could interact with both hands, applying the preferred hand gestures to 2D and 3D visualizations of biochemical structures. We found that the gestures are influenced to a significant extent by the fast-paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii, and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.

  8. The Diamond Beamline Controls and Data Acquisition Software Architecture

    NASA Astrophysics Data System (ADS)

    Rees, N.

    2010-06-01

    The software for the Diamond Light Source beamlines[1] is based on two complementary software frameworks: low level control is provided by the Experimental Physics and Industrial Control System (EPICS) framework[2][3] and the high level user interface is provided by the Java based Generic Data Acquisition or GDA[4][5]. EPICS provides a widely used, robust, generic interface across a wide range of hardware where the user interfaces are focused on serving the needs of engineers and beamline scientists to obtain detailed low level views of all aspects of the beamline control systems. The GDA system provides a high-level system that combines an understanding of scientific concepts, such as reciprocal lattice coordinates, a flexible python syntax scripting interface for the scientific user to control their data acquisition, and graphical user interfaces where necessary. This paper describes the beamline software architecture in more detail, highlighting how these complementary frameworks provide a flexible system that can accommodate a wide range of requirements.

  9. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    PubMed

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

    Auditory brain-computer interfaces are an assistive technology that can restore communication for motor impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom with additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues, can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and specifically end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
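
    Information transfer rates such as the 3.08 bits/min quoted above are conventionally computed with the Wolpaw formula; the sketch below shows that calculation. The symbol-set size and selection rate used in the example are invented for illustration, since the abstract does not state the speller's actual parameters.

      import math

      def wolpaw_itr(n_classes, accuracy, selections_per_min):
          """Information transfer rate (bits/min) by the Wolpaw formula."""
          n, p = n_classes, accuracy
          if p >= 1.0:
              bits_per_selection = math.log2(n)
          else:
              bits_per_selection = (math.log2(n) + p * math.log2(p)
                                    + (1 - p) * math.log2((1 - p) / (n - 1)))
          return bits_per_selection * selections_per_min

      # Hypothetical example: a 28-symbol speller at 92% accuracy, one selection per minute.
      print(round(wolpaw_itr(28, 0.92, 1.0), 2))   # about 4.02 bits/min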

  10. Multi-robot control interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruemmer, David J; Walton, Miles C

    Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.

  11. User interface support

    NASA Technical Reports Server (NTRS)

    Lewis, Clayton; Wilde, Nick

    1989-01-01

    Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.

  12. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement from the changing positions and numbers of light sources on the head. When the user utilizes the head-mounted display to browse a computer screen, the system captures images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer locates each center point of the pupils in the images, and records the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system captures images of the user's head using a CCD camera in front of the user. The computer program locates the center point of the head, transferring it to screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.

  13. Human-computer interface incorporating personal and application domains

    DOEpatents

    Anderson, Thomas G. [Albuquerque, NM]

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  14. Human-computer interface incorporating personal and application domains

    DOEpatents

    Anderson, Thomas G.

    2004-04-20

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  15. Role-Based And Adaptive User Interface Designs In A Teledermatology Consult System: A Way To Secure And A Way To Enhance

    PubMed Central

    Lin, Yi-Jung; Speedie, Stuart

    2003-01-01

    User interface design is one of the most important parts of developing applications. Nowadays, a quality user interface must not only accommodate interaction between machines and users, but also needs to recognize the differences among users and provide functionalities for users from role to role or even individual to individual. With the web-based application of our Teledermatology consult system, the development environment provides highly useful opportunities to create dynamic user interfaces, which let us gain greater access control and have the potential to increase the efficiency of the system. We will describe the two models of user interfaces in our system: Role-based and Adaptive. PMID:14728419

  16. A novel asynchronous access method with binary interfaces

    PubMed Central

    2008-01-01

    Background: Traditionally, synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods: We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. Then, a recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results: When compared to a random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion: This novel access method offers a variety of advantages over traditional synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
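
    The recursive estimator sketched in the abstract can be pictured as a belief over candidate commands that is updated whenever the user activates the switch (signalling that the device is behaving erroneously) or stays silent. The Python sketch below is a strongly simplified reading of that idea with made-up probabilities; it is not the authors' published algorithm.

      def update_belief(belief, executing, error_signalled, miss_prob=0.1):
          """belief: dict command -> probability. `executing` is the command the
          device is currently carrying out. A switch activation means the user
          considers that command wrong; silence weakly confirms it."""
          new = {}
          for cmd, p in belief.items():
              if error_signalled:
                  like = miss_prob if cmd == executing else 1.0
              else:
                  like = 1.0 if cmd == executing else miss_prob
              new[cmd] = p * like
          total = sum(new.values())
          return {c: p / total for c, p in new.items()}

      belief = {"forward": 1 / 3, "left": 1 / 3, "stop": 1 / 3}
      belief = update_belief(belief, executing="forward", error_signalled=True)
      print(belief)   # probability mass shifts away from "forward"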

  17. Computerized procedures system

    DOEpatents

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.

  18. Intelligent user interface concept for space station

    NASA Technical Reports Server (NTRS)

    Comer, Edward; Donaldson, Cameron; Bailey, Elizabeth; Gilroy, Kathleen

    1986-01-01

    The space station computing system must interface with a wide variety of users, from highly skilled operations personnel to payload specialists from all over the world. The interface must accommodate a wide variety of operations from the space platform, ground control centers and from remote sites. As a result, there is a need for a robust, highly configurable and portable user interface that can accommodate the various space station missions. The concept of an intelligent user interface executive, written in Ada, that would support a number of advanced human interaction techniques, such as windowing, icons, color graphics, animation, and natural language processing is presented. The user interface would provide intelligent interaction by understanding the various user roles, the operations and mission, the current state of the environment and the current working context of the users. In addition, the intelligent user interface executive must be supported by a set of tools that would allow the executive to be easily configured and to allow rapid prototyping of proposed user dialogs. This capability would allow human engineering specialists acting in the role of dialog authors to define and validate various user scenarios. The set of tools required to support development of this intelligent human interface capability is discussed and the prototyping and validation efforts required for development of the Space Station's user interface are outlined.

  19. User participation in the development of the human/computer interface for control centers

    NASA Technical Reports Server (NTRS)

    Broome, Richard; Quick-Campbell, Marlene; Creegan, James; Dutilly, Robert

    1996-01-01

    Technological advances coupled with the requirements to reduce operations staffing costs led to the demand for efficient, technologically-sophisticated mission operations control centers. The control center under development for the earth observing system (EOS) is considered. The users are involved in the development of a control center in order to ensure that it is cost-efficient and flexible. A number of measures were implemented in the EOS program in order to encourage user involvement in the area of human-computer interface development. The following user participation exercises carried out in relation to the system analysis and design are described: the shadow participation of the programmers during a day of operations; the flight operations personnel interviews; and the analysis of the flight operations team tasks. The user participation in the interface prototype development, the prototype evaluation, and the system implementation are reported on. The involvement of the users early in the development process enables the requirements to be better understood and the cost to be reduced.

  20. Multi-modal virtual environment research at Armstrong Laboratory

    NASA Technical Reports Server (NTRS)

    Eggleston, Robert G.

    1995-01-01

    One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.

  1. Physician acceptance of the IRIS user interface during a clinical trial at the Ottawa Civic Hospital

    NASA Astrophysics Data System (ADS)

    Coristine, Marjorie; Beeton, Carolyn; Tombaugh, Jo W.; Ahuja, J.; Belanger, Garry; Dillon, Richard F.; Currie, Shawn; Hind, E.

    1990-07-01

    During a clinical trial, emergency physicians and radiologists at the Ottawa Civic Hospital used IRIS (Integrated Radiological Information System) to process patients' x-rays, requisitions, and reports, and to hold consultations, for 319 active cases. This paper discusses IRIS user interface issues raised during the clinical trial. The IRIS workstation consists of three major system components: 1) an image screen for viewing and enhancing images; 2) a control screen for presenting patient information, selecting images, and executing commands; and 3) a hands-free telephone for reporting activities and consultations. The control screen and hands-free telephone user interface allow physicians to navigate through patient files, select images and access reports, enter new reports, and perform remote consultations. Physicians were observed using the system during the trial and responded to questions about the user interface in an extensive debriefing interview after the trial. Overall, radiologists and emergency physicians were satisfied with IRIS control screen functionality and the user interface. In a number of areas, radiologists and emergency physicians differed in their user interface needs. Some features were found to be acceptable to one group of physicians but required modification to meet the needs of the other group. The data from the interviews, along with the comments from radiologists and emergency physicians, provided important information for the revision of some features and for the evolution of new features.

  2. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
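
    As one simplified reading of what an "adequate (correct)" interface model can mean, the sketch below checks that machine states which the interface displays identically never respond differently, at the interface level, to the same user action. This is only an illustrative composite-model check under that assumption, not the actual UIVerify algorithm.

      def interface_adequate(machine_trans, abstraction):
          """machine_trans: dict (machine_state, action) -> next machine_state.
          abstraction: dict machine_state -> interface_state shown to the user.
          Flag the interface as inadequate if two machine states that look the
          same to the user diverge, under the same action, into states that look
          different."""
          induced = {}   # (interface_state, action) -> interface_state
          for (m, action), m_next in machine_trans.items():
              key = (abstraction[m], action)
              target = abstraction[m_next]
              if key in induced and induced[key] != target:
                  return False   # same display, same action, different outcome
              induced[key] = target
          return True

      # Two hidden machine modes are both displayed as "ARMED":
      machine = {("armed_a", "press"): "firing", ("armed_b", "press"): "safe"}
      view = {"armed_a": "ARMED", "armed_b": "ARMED", "firing": "FIRING", "safe": "SAFE"}
      print(interface_adequate(machine, view))   # False: the display hides a mode difference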

  3. New ergonomic headset for Tongue-Drive System with wireless smartphone interface.

    PubMed

    Park, Hangue; Kim, Jeonghee; Huo, Xueliang; Hwang, In-O; Ghovanloo, Maysam

    2011-01-01

    Tongue Drive System (TDS) is a wireless tongue-operated assistive technology (AT), developed for people with severe physical disabilities to control their environment using their tongue motion. We have developed a new ergonomic headset for the TDS with a user-friendly smartphone interface, through which users will be able to wirelessly control various devices, access computers, and drive wheelchairs. This headset design is expected to act as a flexible and multifunctional communication interface for the TDS and improve its usability, accessibility, aesthetics, and convenience for the end users.

  4. User interface issues in supporting human-computer integrated scheduling

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    Explored here are the user interface problems encountered with the Operations Missions Planner (OMP) project at the Jet Propulsion Laboratory (JPL). OMP uses a unique iterative approach to planning that places additional requirements on the user interface, particularly to support system development and maintenance. These requirements are necessary to support the concepts of heuristically controlled search, in-progress assessment, and iterative refinement of the schedule. The techniques used to address the OMP interface needs are given.

  5. Cognition-based development and evaluation of ergonomic user interfaces for medical image processing and archiving systems.

    PubMed

    Demiris, A M; Meinzer, H P

    1997-01-01

    Whether or not a computerized system enhances the conditions of work in the application domain depends very much on the user interface. Graphical user interfaces seem to attract the interest of users but mostly ignore some basic rules of visual information processing, thus leading to systems which are difficult to use, lowering productivity and increasing working stress (cognitive load and workload). In this work we present some fundamental ergonomic considerations and their application to the medical image processing and archiving domain. We introduce the extensions to an existing concept needed to control and guide the development of GUIs with respect to domain-specific ergonomics. The suggested concept, called Model-View-Controller Constraints (MVCC), can be used to programmatically implement ergonomic constraints, and thus has some advantages over written style guides. We conclude with a presentation of existing norms and methods for evaluating user interfaces.

  6. Systems, methods, and products for graphically illustrating and controlling a droplet actuator

    NASA Technical Reports Server (NTRS)

    Brafford, Keith R. (Inventor); Pamula, Vamsee K. (Inventor); Paik, Philip Y. (Inventor); Pollack, Michael G. (Inventor); Sturmer, Ryan A. (Inventor); Smith, Gregory F. (Inventor)

    2010-01-01

    Systems for controlling a droplet microactuator are provided. According to one embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, and a display device displaying a user interface electronically coupled to the controller, wherein the system is programmed and configured to permit a user to effect a droplet manipulation by interacting with the user interface. According to another embodiment, a system is provided and includes a processor, a display device electronically coupled to the processor, and software loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller and programmed to display an interactive map of a droplet microactuator. According to yet another embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, a display device displaying a user interface electronically coupled to the controller, and software for executing a protocol loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller.

  7. Eye-gaze determination of user intent at the computer interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-12-31

    Determination of user intent at the computer interface through eye-gaze monitoring can significantly aid applications for the disabled, as well as telerobotics and process control interfaces. Whereas current eye-gaze control applications are limited to object selection and x/y gazepoint tracking, a methodology was developed here to discriminate a more abstract interface operation: zooming in or out. This methodology first collects samples of eye-gaze location while looking at controlled stimuli, at 30 Hz, just prior to a user's decision to zoom. The sample is broken into data frames, or temporal snapshots. Within a data frame, all spatial samples are connected into a minimum spanning tree, then clustered, according to user-defined parameters. Each cluster is mapped to one in the prior data frame, and statistics are computed from each cluster. These characteristics include cluster size, position, and pupil size. A multiple discriminant analysis uses these statistics both within and between data frames to formulate optimal rules for assigning the observations into zoom-in, zoom-out, or no-zoom conditions. The statistical procedure effectively generates heuristics for future assignments, based upon these variables. Future work will enhance the accuracy and precision of the modeling technique, and will empirically test users in controlled experiments.
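
    To make the clustering step concrete, the sketch below connects the gaze samples of one data frame into a minimum spanning tree (Prim's algorithm) and cuts edges longer than a user-defined threshold, leaving the connected components as gaze clusters. This follows the general procedure outlined in the abstract, but the distance threshold and the subsequent discriminant analysis are placeholders, not the authors' parameters.

      import math

      def mst_clusters(points, cut_distance):
          """Build a minimum spanning tree over 2-D gaze samples, drop edges longer
          than cut_distance, and return the resulting clusters of points."""
          n = len(points)
          dist = lambda a, b: math.dist(points[a], points[b])
          in_tree, edges = {0}, []
          while len(in_tree) < n:
              a, b = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                         key=lambda e: dist(*e))
              in_tree.add(b)
              if dist(a, b) <= cut_distance:
                  edges.append((a, b))
          parent = list(range(n))           # union-find over the kept edges
          def find(i):
              while parent[i] != i:
                  i = parent[i]
              return i
          for a, b in edges:
              parent[find(a)] = find(b)
          clusters = {}
          for i in range(n):
              clusters.setdefault(find(i), []).append(points[i])
          return list(clusters.values())

      gaze = [(100, 100), (103, 98), (101, 105), (400, 300), (402, 303)]
      print(mst_clusters(gaze, cut_distance=50))   # two clusters, near (100,100) and (400,300)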

  8. Trans-Interface Optical Communication (TIOC)

    DTIC Science & Technology

    2008-01-01

    communication interface; 4. Bitmap stream creation; 5. Display thread; 6. DMD ActiveX control; 7. DMD communication; 8. System timing/control; 9. ... DMD ActiveX control; DMD communication; system timing/control; graphical user interface (GUI). All components are available for ...

  9. Transportable Applications Environment Plus, Version 5.1

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Transportable Applications Environment Plus (TAE+) computer program providing integrated, portable programming environment for developing and running application programs based on interactive windows, text, and graphical objects. Enables both programmers and nonprogrammers to construct their own custom application interfaces easily and to move interfaces and application programs to different computers. Used to define corporate user interface, with noticeable improvements in application developer's and end user's learning curves. Main components are: WorkBench, What You See Is What You Get (WYSIWYG) software tool for design and layout of user interface; and WPT (Window Programming Tools) package, set of callable subroutines controlling user interface of application program. WorkBench and WPTs written in C++, and remaining code written in C.

  10. Use of force feedback to enhance graphical user interfaces

    NASA Astrophysics Data System (ADS)

    Rosenberg, Louis B.; Brave, Scott

    1996-04-01

    This project focuses on the use of force feedback sensations to enhance user interaction with standard graphical user interface paradigms. While typical joystick and mouse devices are input-only, force feedback controllers allow physical sensations to be reflected to a user. Tasks that require users to position a cursor on a given target can be enhanced by applying physical forces to the user that aid in targeting. For example, an attractive force field implemented at the location of a graphical icon can greatly facilitate target acquisition and selection of the icon. It has been shown that force feedback can enhance a user's ability to perform basic functions within graphical user interfaces.
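
    A small illustration of the attractive force field mentioned above: inside a capture radius around an icon, the device pulls the cursor toward the icon's centre with a spring-like force. The radius and stiffness values below are invented for the example.

      import math

      def icon_attraction(cursor, icon, capture_radius=60.0, stiffness=0.02):
          """Return the (fx, fy) force to reflect to the user: zero outside the
          capture radius, a spring pull toward the icon centre inside it."""
          dx, dy = icon[0] - cursor[0], icon[1] - cursor[1]
          distance = math.hypot(dx, dy)
          if distance > capture_radius or distance == 0.0:
              return (0.0, 0.0)
          return (stiffness * dx, stiffness * dy)

      print(icon_attraction(cursor=(510, 300), icon=(500, 300)))   # pulled toward the icon: (-0.2, 0.0)
      print(icon_attraction(cursor=(900, 300), icon=(500, 300)))   # out of range: (0.0, 0.0)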

  11. User interaction with the LUCIFER control software

    NASA Astrophysics Data System (ADS)

    Knierim, Volker; Jütte, Marcus; Polsterer, Kai; Schimmelmann, Jan

    2006-06-01

    We present the concept and design of the interaction between users and the LUCIFER Control Software Package. The necessary functionality that must be provided to a user depends on and differs greatly for the different user types (i.e., engineers and observers). While engineers want total control over every service provided by the software system, observers are typically only interested in a fault tolerant and efficient user interface that helps them to carry out their observations in the best possible way during the night. To provide the functionality engineers need, direct access to a service is necessary. This may harbor a possible threat to the instrument in the case of a faulty operation by the engineer, but is the only way to test every unit during integration and commissioning of the instrument, and for service time later on. The observer on the other hand should only have indirect access to the instrument, controlled by an instrument manager service that ensures the necessary safety checks so that no harm can be done to the instrument. Our design of the user interaction provides such an approach on a level that is transparent to any interaction component regardless of interface type (i.e., textual or graphical). Using the interface and inheritance concepts of the Java Programming Language and its tools to create graphical user interfaces, it is possible to provide the necessary level of flexibility for the different user types on one side, while ensuring maximum reusability of code on the other side.

  12. Speech-recognition interfaces for music information retrieval

    NASA Astrophysics Data System (ADS)

    Goto, Masataka

    2005-09-01

    This paper describes two hands-free music information retrieval (MIR) systems that enable a user to retrieve and play back a musical piece by saying its title or the artist's name. Although various interfaces for MIR have been proposed, speech-recognition interfaces suitable for retrieving musical pieces have not been studied. Our MIR-based jukebox systems employ two different speech-recognition interfaces for MIR, speech completion and speech spotter, which exploit intentionally controlled nonverbal speech information in original ways. The first is a music retrieval system with the speech-completion interface that is suitable for music stores and car-driving situations. When a user only remembers part of the name of a musical piece or an artist and utters only a remembered fragment, the system helps the user recall and enter the name by completing the fragment. The second is a background-music playback system with the speech-spotter interface that can enrich human-human conversation. When a user is talking to another person, the system allows the user to enter voice commands for music playback control by spotting a special voice-command utterance in face-to-face or telephone conversations. Experimental results from use of these systems have demonstrated the effectiveness of the speech-completion and speech-spotter interfaces. (Video clips: http://staff.aist.go.jp/m.goto/MIR/speech-if.html)

  13. Interface Anywhere: Development of a Voice and Gesture System for Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    Thompson, Shelby; Haddock, Maxwell; Overland, David

    2013-01-01

    The Interface Anywhere Project was funded through the Innovation Charge Account (ICA) at NASA JSC in the Fall of 2012. The project was a collaboration between human factors and engineering to explore the possibility of designing an interface to control basic habitat operations through gesture and voice: (a) current interfaces require users to be physically near an input device in order to interact with the system; and (b) by using voice and gesture commands, the user is able to interact with the system anywhere within the work environment.

  14. Designing a Humane Multimedia Interface for the Visually Impaired.

    ERIC Educational Resources Information Center

    Ghaoui, Claude; Mann, M.; Ng, Eng Huat

    2001-01-01

    Promotes the provision of interfaces that allow users to access most of the functionality of existing graphical user interfaces (GUI) using speech. Uses the design of a speech control tool that incorporates speech recognition and synthesis into existing packaged software such as Teletext, the Internet, or a word processor. (Contains 22…

  15. Kinesthetic Force Feedback and Belt Control for the Treadport Locomotion Interface.

    PubMed

    Hejrati, Babak; Crandall, Kyle L; Hollerbach, John M; Abbott, Jake J

    2015-01-01

    This paper describes an improved control system for the Treadport immersive locomotion interface, with results that generalize to any treadmill that utilizes an actuated tether to enable self-selected walking speed. A new belt controller is implemented to regulate the user's position; when combined with the user's own volition, this controller also enables the user to naturally self-select their walking speed as they would when walking over ground. A new kinesthetic-force-feedback controller is designed for the tether that applies forces to the user's torso. This new controller is derived based on maintaining the user's sense of balance during belt acceleration, rather than by rendering an inertial force as was done in our prior work. Based on the results of a human-subjects study, the improvements in both controllers significantly contribute to an improved perception of realistic walking on the Treadport. The improved control system uses intuitive dynamic-system and anatomical parameters and requires no ad hoc gain tuning. The control system simply requires three measurements to be made for a given user: the user's mass, the user's height, and the height of the tether attachment point on the user's torso.

  16. Who's Zooming Whom? Attunement to Animation in the Interface.

    ERIC Educational Resources Information Center

    Chui, Michael; Dillon, Andrew

    1997-01-01

    Two controlled experiments examined whether the animated zooming effect accompanying the opening or closing of a folder in the Apple Macintosh graphical user interface aids in the user's perception of which window corresponds to which folder. Results suggest users may become attuned to the informational content of the zooming effect with…

  17. Development and Design of a User Interface for a Computer Automated Heating, Ventilation, and Air Conditioning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, B.; /Fermilab

    1999-10-08

    A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high-energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time, detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust them during the run. Human intervention should only be necessary for system start-up and shut-down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning (HVAC) system. The HVAC system is responsible for controlling two subsystems: the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control, a user interface is required. This paper addresses methods and strategies used to design and implement an effective user interface. Background material pertinent to the HVAC system covers the separate water and air subsystems and their purposes. In addition, programming and system automation are also covered.

  18. Intuitive control of mobile robots: an architecture for autonomous adaptive dynamic behaviour integration.

    PubMed

    Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide

    2018-05-01

    In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism with the ability to adapt both towards the user and the robotic morphology. The aim is a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in the way they prefer, moving away from the case where the user has to read and understand an operation manual, or has to learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole. Inherent properties of the architecture are presented and explained. At the same time, emergent properties are presented and investigated. As a whole, this paradigm is found to highlight the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human-in-the-loop systems.

  19. The effects of time delays on a telepathology user interface.

    PubMed Central

    Carr, D.; Hasegawa, H.; Lemmon, D.; Plaisant, C.

    1992-01-01

    Telepathology enables a pathologist to examine physically distant tissue samples by microscope operation over a communication link. Communication links can impose time delays which cause difficulties in controlling the remote device. Such difficulties were found in a microscope teleoperation system. Since the user interface is critical to pathologists' acceptance of telepathology, we redesigned the user interface for this system and built two different versions: a keypad whose movement commands operated by specifying a start command followed by a stop command, and a trackball interface whose movement commands were incremental and directly proportional to the rotation of the trackball. We then conducted a pilot study to determine the effect of time delays on the new user interfaces. In our experiment, the keypad was the faster interface when the time delay was short. There was no evidence to favor either the keypad or the trackball when the time delay was longer. Inexperienced participants benefited from being able to move long distances over the microscope slide by dragging the field-of-view indicator on the touchscreen control panel. The experiment suggests that changes could be made to the trackball interface which would improve its performance. PMID:1482878
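
    The difference between the two command styles can be sketched on a one-axis stage model: the keypad issues a start command and later a stop command, so a delayed stop lets the stage overshoot, whereas the trackball issues displacements proportional to its rotation. The stage model and gains below are illustrative assumptions, not the study's apparatus.

        # Sketch contrasting start/stop (keypad) and incremental (trackball)
        # movement commands for a remote microscope stage; values are illustrative.
        class StageSimulator:
            def __init__(self):
                self.position = 0.0     # one axis, in micrometres
                self.velocity = 0.0     # micrometres per tick

            def tick(self):
                self.position += self.velocity

        # Keypad style: explicit start and stop commands bound a period of motion.
        def keypad_move(stage, ticks_until_stop, speed=5.0):
            stage.velocity = speed               # "start" command
            for _ in range(ticks_until_stop):    # time delay before the "stop" arrives
                stage.tick()
            stage.velocity = 0.0                 # "stop" command

        # Trackball style: each rotation increment maps directly to a displacement.
        def trackball_move(stage, rotation_counts, gain=0.8):
            stage.position += gain * rotation_counts

        stage = StageSimulator()
        keypad_move(stage, ticks_until_stop=12)   # a delayed stop overshoots easily
        print("after keypad move:", stage.position)
        trackball_move(stage, rotation_counts=25)
        print("after trackball move:", stage.position)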

  20. Traffic Generator (TrafficGen) Version 1.4.2: Users Guide

    DTIC Science & Technology

    2016-06-01

    For events, the user has to enter multicast addresses manually; a way to better define and organize the multicast addresses will be researched and implemented. The tool loads the network with Transmission Control Protocol and User Datagram Protocol Internet Protocol traffic, with each node generating network traffic. The users guide covers the TrafficGen Graphical User Interface (GUI), including the anatomy of the user interface, scenario configuration, and MGEN files.

  1. Young Children's Skill in Using a Mouse to Control a Graphical Computer Interface.

    ERIC Educational Resources Information Center

    Crook, Charles

    1992-01-01

    Describes a study that investigated the performance of preschoolers and children in the first three years of formal education on tasks that involved skills using a mouse-based control of a graphical computer interface. The children's performance is compared with that of novice adult users and expert users. (five references) (LRW)

  2. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands

    PubMed Central

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system’s complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs. PMID:26069961
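
    The selection loop can be pictured as a scanning menu: the controller presents one function at a time and the user confirms the current one with a single command signal. The Python sketch below simulates that loop; the function names, dwell time, and simulated acknowledge signal are illustrative assumptions, and printing stands in for the electro-tactile stimulation.

        # Hedged sketch of a bidirectional scanning-menu selection loop.
        import time

        FUNCTIONS = ["open hand", "close hand", "pinch grip", "rotate wrist"]

        def scan_and_select(user_acknowledges, dwell_s=0.2, poll_s=0.01):
            """Cycle through FUNCTIONS until the user issues one acknowledge signal.

            `user_acknowledges` is a callable returning True once the single command
            signal (e.g. one muscle contraction) is detected for the current option.
            """
            while True:
                for option in FUNCTIONS:
                    print("presenting:", option)       # stand-in for stimulation
                    t_end = time.time() + dwell_s
                    while time.time() < t_end:
                        if user_acknowledges():
                            return option
                        time.sleep(poll_s)

        if __name__ == "__main__":
            # Simulated user who acknowledges partway through the third option.
            polls = {"n": 0}
            def simulated_user():
                polls["n"] += 1
                return polls["n"] > 50
            print("selected:", scan_and_select(simulated_user))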

  3. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands.

    PubMed

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system's complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs.

  4. Save medical personnel's time by improved user interfaces.

    PubMed

    Kindler, H

    1997-01-01

    Common objectives in the industrial countries are the improvement of quality of care, clinical effectiveness, and cost control. Cost control, in particular, has been addressed through the introduction of case mix systems for reimbursement by social-security institutions. More data is required to enable quality improvement, increases in clinical effectiveness and for juridical reasons. At first glance, this documentation effort is contradictory to cost reduction. However, integrated services for resource management based on better documentation should help to reduce costs. The clerical effort for documentation should be decreased by providing a co-operative working environment for healthcare professionals applying sophisticated human-computer interface technology. Additional services, e.g., automatic report generation, increase the efficiency of healthcare personnel. Modelling the medical work flow forms an essential prerequisite for integrated resource management services and for co-operative user interfaces. A user interface aware of the work flow provides intelligent assistance by offering the appropriate tools at the right moment. Nowadays there is a trend to client/server systems with relational databases or object-oriented databases as repository. The work flows used for controlling purposes and to steer the user interfaces must be represented in the repository.

  5. Finding and Exploring Health Information with a Slider-Based User Interface.

    PubMed

    Pang, Patrick Cheong-Iao; Verspoor, Karin; Pearce, Jon; Chang, Shanton

    2016-01-01

    Despite the fact that search engines are the primary channel to access online health information, there are better ways to find and explore health information on the web. Search engines are prone to problems when they are used to find health information. For instance, users have difficulties in expressing health scenarios with appropriate search keywords, search results are not optimised for medical queries, and the search process does not account for users' literacy levels and reading preferences. In this paper, we describe our approach to addressing these problems by introducing a novel design using a slider-based user interface for discovering health information without the need for precise search keywords. The user evaluation suggests that the interface is easy to use and able to assist users in the process of discovering new information. This study demonstrates the potential value of adopting slider controls in the user interface of health websites for navigation and information discovery.

  6. Upper Body-Based Power Wheelchair Control Interface for Individuals With Tetraplegia.

    PubMed

    Thorp, Elias B; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica P; Pierella, Camilla; Roth, Elliot J; Seanez Gonzalez, Ismael; Mussa-Ivaldi, Ferdinando A

    2016-02-01

    Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users a limited vocabulary of commands and often do not take advantage of the user's residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high-dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control a power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five sessions of driving, were able to achieve smoothness similar to that with their current joystick. All participants were twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize training on controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control.
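
    A plausible reading of the redundant-control idea is a dimensionality reduction from many shoulder-motion channels to two proportional commands. The sketch below uses a PCA-style mapping for that purpose; the sensor count, calibration data, and gain are assumptions for illustration rather than details of the published BMI.

        # Hedged sketch: map high-dimensional shoulder signals to two proportional
        # wheelchair commands via the first two principal axes of calibration data.
        import numpy as np

        class BodyMachineInterface:
            def __init__(self, calibration, gain=1.0):
                """calibration: (n_samples, n_sensors) array of free shoulder motion."""
                centered = calibration - calibration.mean(axis=0)
                # Principal directions of the user's residual shoulder motion
                _, _, vt = np.linalg.svd(centered, full_matrices=False)
                self.mean = calibration.mean(axis=0)
                self.map_2d = vt[:2]          # first two principal axes
                self.gain = gain

            def command(self, sensors):
                """Map one sensor reading to (forward_speed, turn_rate)."""
                forward, turn = self.gain * (self.map_2d @ (sensors - self.mean))
                return float(forward), float(turn)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            calib = rng.normal(size=(500, 8))     # 8 hypothetical motion channels
            bmi = BodyMachineInterface(calib)
            print(bmi.command(rng.normal(size=8)))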

  7. User interface customization on Endoscopy Department Mini-PACS and its impact on examination workflow

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Kaise, Mitsuru; Ozeki, Takeshi; Tsunakawa, Hirofumi; Tsunakawa, Kiyoshi; Takayanagi, Takashi; Suzuki, Nobuaki; Miwa, Jun; Ohta, Yasuhiko; Kanai, Koichi

    1999-07-01

    We have proposed a new user interface with workflow customization, and implemented and evaluated it in the Endoscopy Department Mini-PACS that has been introduced and routinely used for two years at Toshiba General Hospital. We set tasks at endoscopy image acquisition units during examinations for two different types of user interfaces and compared performance. One is a command-button based operation using a remote control, and the other uses eight graphic buttons which are displayed on a CRT monitor and can be customized. Results of the two-year study show that the mean number of input diagnosis codes per examination with the graphic and customized interface is significantly greater than that with the conventional interface. Also, the mean time to complete one upper gastric endoscopy examination with the new user interface is about 17 percent less than that with the conventional interface. These results suggest that systems with visualized and customized operation and feedback encourage physicians to use more functions and to complete tasks more efficiently than systems with conventional command-button based user interfaces.

  8. Interface design and human factors considerations for model-based tight glycemic control in critical care.

    PubMed

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error, based on simple human factors and end-user input. The graphical user interface (GUI) design is presented by construction, based on a series of simple, short design criteria grounded in fundamental human factors engineering, and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nursing staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. © 2012 Diabetes Technology Society.

  9. Interface Design and Human Factors Considerations for Model-Based Tight Glycemic Control in Critical Care

    PubMed Central

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Introduction Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. Method The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. Results The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. PMID:22401330

  10. Design of Flight Control Panel Layout using Graphical User Interface in MATLAB

    NASA Astrophysics Data System (ADS)

    Wirawan, A.; Indriyanto, T.

    2018-04-01

    This paper introduces the design of a Flight Control Panel (FCP) layout using the Graphical User Interface in MATLAB. The FCP is the interface used to give commands to the simulation and to monitor model variables while the simulation is running. The commands accommodated by the FCP are the altitude command, the angle-of-sideslip command, the heading command, and the setting command for the turbulence model. The FCP was also designed to monitor flight parameters while the simulation is running.

  11. Spatial issues in user interface design from a graphic design perspective

    NASA Technical Reports Server (NTRS)

    Marcus, Aaron

    1989-01-01

    The user interface of a computer system is a visual display that provides information about the status of operations on data within the computer and control options to the user that enable adjustments to these operations. From the very beginning of computer technology the user interface was a spatial display, although its spatial features were not necessarily complex or explicitly recognized by the users. All text and nonverbal signs appeared in a virtual space generally thought of as a single flat plane of symbols. Current technology of high performance workstations permits any element of the display to appear as dynamic, multicolor, 3-D signs in a virtual 3-D space. The complexity of appearance and the user's interaction with the display provide significant challenges to the graphic designer of current and future user interfaces. In particular, spatial depiction provides many opportunities for effective communication of objects, structures, processes, navigation, selection, and manipulation. Issues are presented that are relevant to the graphic designer seeking to optimize the user interface's spatial attributes for effective visual communication.

  12. A methodology for the design and evaluation of user interfaces for interactive information systems. Ph.D. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Farooq, Mohammad U.

    1986-01-01

    The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
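
    The interaction-event idea can be made concrete as a small record type whose fields hold the generic elements of a dialogue step. The sketch below is one such representation; the field names and the example session are hypothetical, not the thesis's actual schema.

        # Hypothetical sketch of an "interaction event" record and a logged session.
        from dataclasses import dataclass, field
        from datetime import datetime

        @dataclass
        class InteractionEvent:
            layer: str                  # "interactive", "dialogue", or "application"
            prompt: str                 # what the interface presented
            user_input: str             # what the user entered or selected
            system_response: str        # what the system did or displayed
            success: bool               # whether the event completed the intent
            timestamp: datetime = field(default_factory=datetime.now)

        # A short dialogue logged as a sequence of interaction events, which the
        # methodology could later analyse to characterize user behaviour.
        session = [
            InteractionEvent("dialogue", "query>", "FIND author=Smith",
                             "42 records found", True),
            InteractionEvent("dialogue", "query>", "SHOW 1-10",
                             "displayed first 10 records", True),
        ]
        print(sum(e.success for e in session), "of", len(session), "events succeeded")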

  13. Space station automation of common module power management and distribution, volume 2

    NASA Technical Reports Server (NTRS)

    Ashworth, B.; Riedesel, J.; Myers, C.; Jakstas, L.; Smith, D.

    1990-01-01

    The new Space Station Module Power Management and Distribution System (SSM/PMAD) testbed automation system is described. The subjects discussed include testbed 120 volt dc star bus configuration and operation, SSM/PMAD automation system architecture, fault recovery and management expert system (FRAMES) rules english representation, the SSM/PMAD user interface, and the SSM/PMAD future direction. Several appendices are presented and include the following: SSM/PMAD interface user manual version 1.0, SSM/PMAD lowest level processor (LLP) reference, SSM/PMAD technical reference version 1.0, SSM/PMAD LLP visual control logic representation's (VCLR's), SSM/PMAD LLP/FRAMES interface control document (ICD) , and SSM/PMAD LLP switchgear interface controller (SIC) ICD.

  14. Assessment of a User Guide for One Semi-Automated Forces (OneSAF) Version 2.0

    DTIC Science & Technology

    2009-09-01

    OneSAF uses a two-dimensional feature named the Plan View Display (PVD) as the primary graphical interface. The PVD replicates a map with a series of … As the primary interface, the PVD is how the user watches the scenario unfold, and it requires the most interaction with the user. As seen in Table 3, all … participant indicated never using these seven map-related functions. Graphic control measures are applied to the PVD map to …

  15. Imagining a Stata / Python Combination

    NASA Technical Reports Server (NTRS)

    Fiedler, James

    2012-01-01

    There are occasions when a task is difficult in Stata, but fairly easy in a more general programming language. Python is a popular language for a range of uses. It is easy to use, has many high-quality packages, and programs can be written relatively quickly. Is there any advantage in combining Stata and Python within a single interface? Stata already offers support for user-written programs, which allow extensive control over calculations, but somewhat less control over graphics. Also, except for specifying output, the user has minimal programmatic control over the user interface. Python can be used in a way that allows more control over the interface and graphics, and in so doing provides a roundabout method for satisfying some user requests (e.g., transparency levels in graphics and the ability to clear the results window). My talk will explore these ideas, present a possible method for combining Stata and Python, and give examples to demonstrate how this combination might be useful.
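
    One possible method, sketched below under stated assumptions, is a small Python/tkinter front end that runs a do-file through Stata's batch mode and shows the resulting log in a clearable results window. The executable name "stata", the "-b do" invocation, and the log-file naming are assumptions about a typical installation, not details from the talk.

        # Speculative sketch of a Python front end driving Stata in batch mode.
        import subprocess
        import tkinter as tk
        from tkinter import scrolledtext

        def run_dofile(path="analysis.do"):
            """Run a Stata do-file in batch mode and return the resulting log text."""
            # Assumed invocation; adjust the executable and flags for your platform.
            subprocess.run(["stata", "-b", "do", path], check=True)
            with open(path.replace(".do", ".log"), encoding="utf-8") as f:
                return f.read()

        def main():
            root = tk.Tk()
            root.title("Stata/Python front end (sketch)")
            results = scrolledtext.ScrolledText(root, width=100, height=30)
            results.pack()
            tk.Button(root, text="Run do-file",
                      command=lambda: results.insert(tk.END, run_dofile())).pack(side=tk.LEFT)
            # "Clear results" simply wipes the text widget holding the log.
            tk.Button(root, text="Clear results",
                      command=lambda: results.delete("1.0", tk.END)).pack(side=tk.LEFT)
            root.mainloop()

        if __name__ == "__main__":
            main()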

  16. A graphical, rule based robotic interface system

    NASA Technical Reports Server (NTRS)

    Mckee, James W.; Wolfsberger, John

    1988-01-01

    The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time-consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high-level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the object on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule-based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of databases accessible to the program and displayable to the user.
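
    The two-stage logic (feasibility check, then command generation) can be illustrated with a small rule table, as in the sketch below. The robot limits, object properties, and command names are hypothetical placeholders, not the actual databases described above.

        # Hypothetical sketch: rule-based feasibility check followed by command
        # generation for a "move object" request.
        ROBOT = {"max_payload_kg": 5.0, "reach_m": 1.2}

        OBJECTS = {
            "toolbox": {"mass_kg": 3.0, "position_m": (0.8, 0.2, 0.1), "fixed": False},
            "bracket": {"mass_kg": 0.5, "position_m": (2.5, 0.0, 0.3), "fixed": True},
        }

        def task_is_possible(obj):
            """Rule checks: payload limit, reach limit, and object constraints."""
            within_payload = obj["mass_kg"] <= ROBOT["max_payload_kg"]
            distance = sum(c * c for c in obj["position_m"]) ** 0.5
            within_reach = distance <= ROBOT["reach_m"]
            return within_payload and within_reach and not obj["fixed"]

        def plan_commands(name, target_m):
            """Translate a feasible 'move object' request into robot commands."""
            obj = OBJECTS[name]
            if not task_is_possible(obj):
                return []
            return [("MOVE_TO", obj["position_m"]),
                    ("GRASP", name),
                    ("MOVE_TO", target_m),
                    ("RELEASE", name)]

        print(plan_commands("toolbox", (0.2, 0.5, 0.1)))   # feasible -> command list
        print(plan_commands("bracket", (0.2, 0.5, 0.1)))   # fixed/out of reach -> []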

  17. Human computer interface guide, revision A

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.

  18. Probabilistic vs linear blending approaches to shared control for wheelchair driving.

    PubMed

    Ezeh, Chinemelu; Trautman, Pete; Devigne, Louise; Bureau, Valentin; Babel, Marie; Carlson, Tom

    2017-07-01

    Some people with severe mobility impairments are unable to operate powered wheelchairs reliably and effectively, using commercially available interfaces. This has sparked a body of research into "smart wheelchairs", which assist users to drive safely and create opportunities for them to use alternative interfaces. Various "shared control" techniques have been proposed to provide an appropriate level of assistance that is satisfactory and acceptable to the user. Most shared control techniques employ a traditional strategy called linear blending (LB), where the user's commands and wheelchair's autonomous commands are combined in some proportion. In this paper, however, we implement a more generalised form of shared control called probabilistic shared control (PSC). This probabilistic formulation improves the accuracy of modelling the interaction between the user and the wheelchair by taking into account uncertainty in the interaction. In this paper, we demonstrate the practical success of PSC over LB in terms of safety, particularly for novice users.
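
    The contrast between the two arbitration schemes can be shown numerically. In the sketch below, linear blending mixes the user and autonomous commands with a fixed weight, while a simple precision-weighted Gaussian fusion stands in for the probabilistic formulation; the commands and variances are illustrative, and the fusion rule is a stand-in rather than the paper's exact model.

        # Sketch contrasting linear blending with a Gaussian-fusion stand-in for
        # probabilistic shared control, on a 1-D steering command.
        def linear_blend(u_user, u_robot, alpha=0.5):
            return alpha * u_user + (1 - alpha) * u_robot

        def probabilistic_blend(u_user, var_user, u_robot, var_robot):
            """Precision-weighted fusion of two Gaussian command estimates."""
            w_user = 1.0 / var_user
            w_robot = 1.0 / var_robot
            return (w_user * u_user + w_robot * u_robot) / (w_user + w_robot)

        # A hesitant (high-variance) user near an obstacle: the probabilistic
        # variant automatically leans on the more certain autonomous command.
        print(linear_blend(0.8, -0.4))                     # 0.2, fixed 50/50 mix
        print(probabilistic_blend(0.8, 4.0, -0.4, 0.25))   # roughly -0.33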

  19. A database for TMT interface control documents

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
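
    The "interface control document as an intersection of two subsystem interfaces" idea can be sketched directly on JSON-style descriptions, as below. The subsystem names, field names, and item names are illustrative and do not reproduce the actual TMT schema.

        # Illustrative sketch: build an ICD as the intersection of two JSON-style
        # subsystem interface descriptions (field names are assumptions).
        import json

        TCS = json.loads("""{
          "subsystem": "TCS",
          "publishes": ["TCS.azimuth", "TCS.elevation"],
          "receives": ["TCS.slewTo"]
        }""")

        IRIS = json.loads("""{
          "subsystem": "IRIS",
          "subscribes": ["TCS.azimuth", "TCS.temperature"],
          "sends": ["TCS.slewTo", "TCS.park"]
        }""")

        def build_icd(provider, consumer):
            """Keep only items one subsystem offers that the other actually uses."""
            return {
                "between": [provider["subsystem"], consumer["subsystem"]],
                "events": sorted(set(provider["publishes"]) & set(consumer["subscribes"])),
                "commands": sorted(set(provider["receives"]) & set(consumer["sends"])),
            }

        print(json.dumps(build_icd(TCS, IRIS), indent=2))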

  20. Adaptive Motor Resistance Video Game Exercise Apparatus and Method of Use Thereof

    NASA Technical Reports Server (NTRS)

    Reich, Alton (Inventor); Shaw, James (Inventor)

    2015-01-01

    The invention comprises a method and/or an apparatus using computer-configured exercise equipment and electric-motor-provided physical resistance in conjunction with a game system, such as a video game system, where the exercise system provides real physical resistance to a user interface. Results of user interaction with the user interface are integrated into a video game, such as one running on a game console. The resistance system comprises: a subject interface, software control, a controller, an electric servo assist/resist motor, an actuator, and/or a subject sensor. The system provides actual physical interaction with a resistance device as input to the game console and the game run thereon.

  1. Teleoperation of Robonaut Using Finger Tracking

    NASA Technical Reports Server (NTRS)

    Champoux, Rachel G.; Luo, Victor

    2012-01-01

    With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is teleoperating a robot. For humanoid robots, a finger tracking interface is required due to the level of complexity in a human-like hand, where a joystick isn't accurate. Moreover, for some tasks, using one's own hands allows the user to communicate their intentions more effectively than other input. The purpose of this project was to develop a natural user interface for someone to teleoperate a robot that is elsewhere. Specifically, this was designed to control Robonaut on the International Space Station to do tasks too dangerous and/or too trivial for human astronauts. This interface was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time. The information is then relayed to a simulator, the stand-in for Robonaut, at a slight delay.

  2. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  3. Unified Desktop for Monitoring & Control Applications - The Open Navigator Framework Applied for Control Centre and EGSE Applications

    NASA Astrophysics Data System (ADS)

    Brauer, U.

    2007-08-01

    The Open Navigator Framework (ONF) was developed to provide a unified and scalable platform for user interface integration. The main objective for the framework was to raise the usability of monitoring and control consoles and to provide reuse of software components in different application areas. ONF is currently applied for the Columbus onboard crew interface, the commanding application for the Columbus Control Centre, the Columbus user facilities' specialized user interfaces, the Mission Execution Crew Assistant (MECA) study, and EADS Astrium internal R&D projects. ONF provides a well-documented and proven middleware for GUI components (Java plugin interface, simplified concept similar to Eclipse). The overall application configuration is performed within a graphical user interface for layout and component selection; the end user does not have to work in the underlying XML configuration files. ONF was optimized to provide harmonized user interfaces for monitoring and command consoles. It provides many convenience functions designed together with flight controllers and onboard crew: user-defined workspaces, including support for multiple screens; an efficient communication mechanism between the components; integrated web browsing and documentation search and viewing; consistent and integrated menus and shortcuts; common logging and application configuration (properties); and a supervision interface for remote plugin GUI access (web based). A large number of operationally proven ONF components have been developed: Command Stack & History (release of commands and follow-up of command acknowledges), System Message Panel (browse, filter, and search system messages/events), Unified Synoptic System (generic synoptic display system), Situational Awareness (overall subsystem status based on monitoring of key parameters), System Model Browser (browse mission database definitions: measurements, commands, events), Flight Procedure Executor (execute checklist and logical-flow interactive procedures), Web Browser (integrated browsing of reference documentation and operations data), Timeline Viewer (view the master timeline as a Gantt chart), and Search (local search of operations products, e.g., documentation, procedures, displays). All GUI components access the underlying spacecraft data (commanding, reporting data, events, command history) via a common library providing adaptors for the current deployments (Columbus MCS, Columbus onboard Data Management System, Columbus Trainer raw packet protocol). New adaptors are easy to develop; currently an adaptor to SCOS 2000 is being developed as part of a study for the ESTEC standardization section ("USS for ESTEC Reference Facility").

  4. Graphical User Interface Development and Design to Support Airport Runway Configuration Management

    NASA Technical Reports Server (NTRS)

    Jones, Debra G.; Lenox, Michelle; Onal, Emrah; Latorella, Kara A.; Lohr, Gary W.; Le Vie, Lisa

    2015-01-01

    The objective of this effort was to develop a graphical user interface (GUI) for the National Aeronautics and Space Administration's (NASA) System Oriented Runway Management (SORM) decision support tool to support runway management. This tool is expected to be used by traffic flow managers and supervisors in the Airport Traffic Control Tower (ATCT) and Terminal Radar Approach Control (TRACON) facilities.

  5. An approach to developing user interfaces for space systems

    NASA Astrophysics Data System (ADS)

    Shackelford, Keith; McKinney, Karen

    1993-08-01

    Inherent weakness in the traditional waterfall model of software development has led to the definition of the spiral model. The spiral model software development lifecycle model, however, has not been applied to NASA projects. This paper describes its use in developing real time user interface software for an Environmental Control and Life Support System (ECLSS) Process Control Prototype at NASA's Marshall Space Flight Center.

  6. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy, different configuration factors such as the gain or the device’s orientation, and the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  7. Pulser: user-friendly, graphical user-interface based software for controlling stimuli during data acquisition with Spike2 for Windows.

    PubMed

    Lidierth, Malcolm

    2005-02-15

    This paper describes software that runs in the Spike2 for Windows environment and provides a versatile tool for generating stimuli during data acquisition from the 1401 family of interfaces (CED, UK). A graphical user interface (GUI) is used to provide dynamic control of stimulus timing. Both single stimuli and trains of stimuli can be generated. The pulse generation routines make use of programmable variables within the interface and allow these to be rapidly changed during an experiment. The routines therefore provide the ease of use associated with external, stand-alone pulse generators. Complex stimulus protocols can be loaded from an external text file, and facilities are included to create these files through the GUI. The software consists of a Spike2 script that runs in the host PC and accompanying routines, written in the 1401 sequencer control code, that run in the 1401 interface. Handshaking between the PC and the interface card is built into the routines and provides for full integration of sampling, analysis, and stimulus generation during an experiment. Control of the 1401 digital-to-analogue converters is also provided; this allows control of stimulus amplitude as well as timing and also provides a sample-hold feature that may be used to remove DC offsets and drift from recorded data.

  8. Linear Quadratic Gaussian Controller Design Using a Graphical User Interface: Application to the Beam-Waveguide Antennas

    NASA Astrophysics Data System (ADS)

    Maneri, E.; Gawronski, W.

    1999-10-01

    The linear quadratic Gaussian (LQG) design algorithms described in [2] and [5] have been used in the controller design of JPL's beam-waveguide [5] and 70-m [6] antennas. This algorithm significantly improves tracking precision in a windy environment. This article describes the graphical user interface (GUI) software for the design of LQG controllers. It consists of two parts: the basic LQG design and the fine-tuning of the basic design using a constrained optimization algorithm. The presented GUI was developed to simplify the design process, to make it user-friendly, and to enable design of an LQG controller by someone with a limited control-engineering background. The user manipulates the GUI sliders and radio buttons and watches the antenna performance. Simple rules are given on the GUI display.
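
    A basic LQG design combines a linear-quadratic regulator gain with a Kalman estimator gain, each obtained from an algebraic Riccati equation. The sketch below shows that standard construction on a small placeholder plant; the model matrices and weights are illustrative assumptions, not the antenna model or the tool's tuned values.

        # Generic LQG design sketch (LQR gain + Kalman gain) for a placeholder plant.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Double-integrator-like plant: x = [angle, rate]
        A = np.array([[0.0, 1.0], [0.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])

        Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])     # LQR weights
        W, V = np.diag([0.01, 0.01]), np.array([[0.001]])  # process/measurement noise

        # LQR state-feedback gain: u = -K x_hat
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)

        # Kalman estimator gain (dual Riccati equation)
        S = solve_continuous_are(A.T, C.T, W, V)
        L = S @ C.T @ np.linalg.inv(V)

        print("LQR gain K:", K)
        print("Kalman gain L:", L.ravel())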

  9. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol, we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  10. MTVis: tree exploration using a multitouch interface

    NASA Astrophysics Data System (ADS)

    Andrews, David; Teoh, Soon Tee

    2010-01-01

    We present MTVis, a multi-touch interactive tree visualization system. The multi-touch interface display hardware is built using the LED-LP technology, and the tree layout is based on RINGS, but enhanced with multi-touch interactions. We describe the features of the system and how the multi-touch interface enhances the user's experience in exploring the tree data structure. In particular, the multi-touch interface allows the user to simultaneously control two child nodes of the root and rotate them so that some nodes are magnified, while preserving the layout of the tree. We also describe the other meaningful touch-screen gestures users can use to intuitively explore the tree.

  11. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  12. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.
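
    The "optimal fixed decoder" baseline can be illustrated by fitting a linear map from simulated neural features to intended cursor velocity by least squares, as in the sketch below. The simulated encoding model, noise level, and dimensions are illustrative assumptions, not the paper's setup.

        # Sketch: fit a fixed linear decoder by least squares on simulated data.
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_neurons = 2000, 30

        # Simulated user: intended 2-D cursor velocity drives neural firing linearly
        intended = rng.normal(size=(n_samples, 2))
        encoder = rng.normal(size=(2, n_neurons))            # user's encoding model
        neural = intended @ encoder + 0.5 * rng.normal(size=(n_samples, n_neurons))

        # Fixed decoder: least-squares map from neural activity back to velocity
        decoder, *_ = np.linalg.lstsq(neural, intended, rcond=None)

        decoded = neural @ decoder
        print("per-axis correlation:",
              [round(float(np.corrcoef(decoded[:, i], intended[:, i])[0, 1]), 3)
               for i in range(2)])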

  13. Network Control Center User Planning System (NCC UPS)

    NASA Astrophysics Data System (ADS)

    Dealy, Brian

    1991-09-01

    NCC UPS is presented in the form of the viewgraphs. The following subject areas are covered: UPS overview; NCC UPS role; major NCC UPS functional requirements; interactive user access levels; UPS interfaces; interactive user subsystem; interface navigation; scheduling screen hierarchy; interactive scheduling input panels; autogenerated schedule request panel; schedule data tabular display panel; schedule data graphic display panel; graphic scheduling aid design; and schedule data graphic display.

  14. Network Control Center User Planning System (NCC UPS)

    NASA Technical Reports Server (NTRS)

    Dealy, Brian

    1991-01-01

    NCC UPS is presented in the form of the viewgraphs. The following subject areas are covered: UPS overview; NCC UPS role; major NCC UPS functional requirements; interactive user access levels; UPS interfaces; interactive user subsystem; interface navigation; scheduling screen hierarchy; interactive scheduling input panels; autogenerated schedule request panel; schedule data tabular display panel; schedule data graphic display panel; graphic scheduling aid design; and schedule data graphic display.

  15. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    PubMed

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose a bilinear model of the EMG signals that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.
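
    For illustration, a simple matrix-factorization reading of the user-dependent/motion-dependent decomposition described above, fitted by alternating least squares; the toy sizes, synthetic data, and the reduction of the bilinear model to a per-user mixing matrix are assumptions, not the authors' formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_channels, n_motions, n_users = 4, 5, 6   # toy sizes (assumptions)

    # X[i]: per-user EMG feature matrix (channels x motions), e.g. mean rectified
    # EMG per channel for each of the five hand gestures.
    B_true = rng.standard_normal((n_channels, n_motions))          # motion-dependent factor
    X = [rng.standard_normal((n_channels, n_channels)) @ B_true    # user-dependent mixing
         + 0.1 * rng.standard_normal((n_channels, n_motions)) for _ in range(n_users)]

    # Alternating least squares for X[i] ~ A[i] @ B with a shared motion-dependent B.
    B = rng.standard_normal((n_channels, n_motions))
    for _ in range(50):
        A = [Xi @ np.linalg.pinv(B) for Xi in X]                   # user-dependent factors
        B = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                            sum(Ai.T @ Xi for Ai, Xi in zip(A, X)))

    # Adapting to a novel user from a small amount of calibration data:
    X_new = rng.standard_normal((n_channels, n_channels)) @ B_true
    A_new = X_new @ np.linalg.pinv(B)                              # estimated user factor
    user_independent = np.linalg.pinv(A_new) @ X_new               # features for the classifier
    print(user_independent)
    ```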

  16. Comparison of electromyography and force as interfaces for prosthetic control.

    PubMed

    Corbett, Elaine A; Perreault, Eric J; Kuiken, Todd A

    2011-01-01

    The ease with which persons with upper-limb amputations can control their powered prostheses is largely determined by the efficacy of the user command interface. One needs to understand the abilities of the human operator regarding the different available options. Electromyography (EMG) is widely used to control powered upper-limb prostheses. It is an indirect estimator of muscle force and may be expected to limit the control capabilities of the prosthesis user. This study compared EMG control with force control, an interface that is used in everyday interactions with the environment. We used both methods to perform a position-tracking task. Direct-position control of the wrist provided an upper bound for human-operator capabilities. The results demonstrated that an EMG control interface is as effective as force control for the position-tracking task. We also examined the effects of gain and tracking frequency on EMG control to explore the limits of this control interface. We found that information transmission rates for myoelectric control were best at higher tracking frequencies than at the frequencies previously reported for position control. The results may be useful for the design of prostheses and prosthetic controllers.

  17. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    PubMed Central

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445

  18. Design of a mobile brain computer interface-based smart multimedia controller.

    PubMed

    Tseng, Kevin C; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh

    2015-03-06

    Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user's physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user's physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user's EEG features and select music according to his/her state. The relationship between the user's state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback based on a user's EEG features may positively improve the user's attention state.

  19. Adapting human-machine interfaces to user performance.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2008-01-01

    The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor at the endpoint of the simulated arm.

  20. Common command-and-control user interface for current force UGS

    NASA Astrophysics Data System (ADS)

    Stolovy, Gary H.

    2009-05-01

    The Current Force Unattended Ground Sensors (UGS) comprise the OmniSense, Scorpion, and Silent Watch systems. As deployed by U.S. Army Central Command in 2006, sensor reports from the three systems were integrated into a common Graphical User Interface (GUI), with three separate vendor-specific applications for Command-and-Control (C2) functions. This paper describes the requirements, system architecture, implementation, and testing of an upgrade to the Processing, Exploitation, and Dissemination back-end server to incorporate common remote Command-and-Control capabilities.

  1. Graphical User Interface for an Observing Control System for the UK Infrared Telescope

    NASA Astrophysics Data System (ADS)

    Tan, M.; Bridger, A.; Wright, G. S.; Adamson, A. J.; Currie, M. J.; Economou, F.

    A graphical user interface for the observing control system of the UK Infrared Telescope has been developed as part of the ORAC (Observatory Reduction and Acquisition Control) project. We analyzed and designed the system using the Unified Modelling Language (UML) with the CASE tool Rational Rose 98. The system has been implemented in a modular way with Java packages using Swing and RMI. The system is component-based and supports pluggable components. Object-orientation concepts and UML notations have been applied throughout the development.

  2. International Space Station Alpha user payload operations concept

    NASA Technical Reports Server (NTRS)

    Schlagheck, Ronald A.; Crysel, William B.; Duncan, Elaine F.; Rider, James W.

    1994-01-01

    International Space Station Alpha (ISSA) will accommodate a variety of user payloads investigating diverse scientific and technology disciplines on behalf of five international partners: Canada, Europe, Japan, Russia, and the United States. A combination of crew, automated systems, and ground operations teams will control payload operations that require complementary on-board and ground systems. This paper presents the current planning for the ISSA U.S. user payload operations concept and the functional architecture supporting the concept. It describes various NASA payload operations facilities, their interfaces, user facility flight support, the payload planning system, the onboard and ground data management system, and payload operations crew and ground personnel training. This paper summarizes the payload operations infrastructure and architecture developed at the Marshall Space Flight Center (MSFC) to prepare and conduct ISSA on-orbit payload operations from the Payload Operations Integration Center (POIC), and from various user operations locations. The authors pay particular attention to user data management, which includes interfaces with both the onboard data management system and the ground data system. Discussion covers the functional disciplines that define and support POIC payload operations: Planning, Operations Control, Data Management, and Training. The paper describes potential interfaces between users and the POIC disciplines, from the U.S. user perspective.

  3. OpenSQUID: A Flexible Open-Source Software Framework for the Control of SQUID Electronics

    DOE PAGES

    Jaeckel, Felix T.; Lafler, Randy J.; Boyd, S. T. P.

    2013-02-06

    We report that commercially available computer-controlled SQUID electronics are usually delivered with software providing a basic user interface for adjustment of SQUID tuning parameters, such as bias current, flux offset, and feedback loop settings. However, in a research context it would often be useful to be able to modify this code and/or to have full control over all these parameters from researcher-written software. In the case of the STAR Cryoelectronics PCI/PFL family of SQUID control electronics, the supplied software contains modules for automatic tuning and noise characterization, but does not provide an interface for user code. On the other hand, the Magnicon SQUIDViewer software package includes a public application programming interface (API), but lacks auto-tuning and noise characterization features. To overcome these and other limitations, we are developing an "open-source" framework for controlling SQUID electronics which should provide maximal interoperability with user software, a unified user interface for electronics from different manufacturers, and a flexible platform for the rapid development of customized SQUID auto-tuning and other advanced features. Finally, we have completed a first implementation for the STAR Cryoelectronics hardware and have made the source code for this ongoing project available to the research community on SourceForge (http://opensquid.sourceforge.net) under the GNU public license.

  4. A Universal Portable Appliance for Stellarator W7-X Power Supply Controlling

    NASA Astrophysics Data System (ADS)

    Xu, Wei-hua; Wolfgang, Foerster; Guenter, Mueller

    2001-06-01

    In the Wendelstein 7-X (W7-X) project, the popular fieldbus Profibus has been chosen as a uniform connection between the central control system and all subordinate systems. A universal embedded control system has been developed for W7-X power supply control. A Siemens 80C167CR microcontroller is used as the central control unit of the system. With a user-defined printed circuit board (PCB), several control buses, i.e., Profibus, CAN, IEEE 488, RS485 and RS232, have been connected to the microcontroller. The corresponding hardware interfaces for these control buses have been designed. A graphic liquid crystal display (LCD) and a user-defined keyboard are used as the user interface. The control software will be developed with a C-like language, i.e., C166, for the controller.

  5. Querying Event Sequences by Exact Match or Similarity Search: Design and Empirical Evaluation

    PubMed Central

    Wongsuphasawat, Krist; Plaisant, Catherine; Taieb-Maimon, Meirav; Shneiderman, Ben

    2012-01-01

    Specifying event sequence queries is challenging even for skilled computer professionals familiar with SQL. Most graphical user interfaces for database search use an exact match approach, which is often effective, but near misses may also be of interest. We describe a new similarity search interface, in which users specify a query by simply placing events on a blank timeline and retrieve a similarity-ranked list of results. Behind this user interface is a new similarity measure for event sequences which the users can customize by four decision criteria, enabling them to adjust the impact of missing, extra, or swapped events or the impact of time shifts. We describe a use case with Electronic Health Records based on our ongoing collaboration with hospital physicians. A controlled experiment with 18 participants compared exact match and similarity search interfaces. We report on the advantages and disadvantages of each interface and suggest a hybrid interface combining the best of both. PMID:22379286
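
    For illustration, a minimal sketch of a customizable event-sequence similarity score of the kind described above, implemented as a dynamic-programming alignment with configurable penalties for missing events, extra events, and time shifts (the swapped-event criterion is omitted for brevity); the weights, event names, and scoring convention are assumptions, not the paper's measure.

    ```python
    def sequence_similarity(query, record,
                            w_missing=1.0, w_extra=1.0, w_shift=0.1):
        """Similarity (negative alignment cost) between two event sequences.

        Each sequence is a list of (event_type, time) pairs sorted by time.
        w_missing : penalty for a query event absent from the record
        w_extra   : penalty for a record event absent from the query
        w_shift   : penalty per unit of time difference between matched events
        """
        n, m = len(query), len(record)
        # cost[i][j] = minimal cost of aligning query[:i] with record[:j]
        cost = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            cost[i][0] = i * w_missing
        for j in range(1, m + 1):
            cost[0][j] = j * w_extra
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                qe, qt = query[i - 1]
                re, rt = record[j - 1]
                candidates = [cost[i - 1][j] + w_missing,   # query event unmatched
                              cost[i][j - 1] + w_extra]     # record event unmatched
                if qe == re:                                 # matched, pay only for time shift
                    candidates.append(cost[i - 1][j - 1] + w_shift * abs(qt - rt))
                cost[i][j] = min(candidates)
        return -cost[n][m]   # higher is more similar

    # Example: a two-event query against a record containing one extra event.
    q = [("Admit", 0), ("Diagnosis", 2)]
    r = [("Admit", 0), ("Lab", 1), ("Diagnosis", 3)]
    print(sequence_similarity(q, r))
    ```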

  6. Human-machine interface hardware: The next decade

    NASA Technical Reports Server (NTRS)

    Marcus, Elizabeth A.

    1991-01-01

    In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable, faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user interact and control a seemingly limitless array of parameters effectively? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroad in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.

  7. Optimizing real-time Web-based user interfaces for observatories

    NASA Astrophysics Data System (ADS)

    Gibson, J. Duane; Pickering, Timothy E.; Porter, Dallan; Schaller, Skip

    2008-08-01

    In using common HTML/Ajax approaches for web-based data presentation and telescope control user interfaces at the MMT Observatory (MMTO), we rapidly were confronted with web browser performance issues. Much of the operational data at the MMTO is highly dynamic and is constantly changing during normal operations. Status of telescope subsystems must be displayed with minimal latency to telescope operators and other users. A major motivation of migrating toward web-based applications at the MMTO is to provide easy access to current and past observatory subsystem data for a wide variety of users on their favorite operating system through a familiar interface, their web browser. Performance issues, especially for user interfaces that control telescope subsystems, led to investigations of more efficient use of HTML/Ajax and web server technologies as well as other web-based technologies, such as Java and Flash/Flex. The results presented here focus on techniques for optimizing HTML/Ajax web applications with near real-time data display. This study indicates that direct modification of the contents or "nodeValue" attribute of text nodes is the most efficient method of updating data values displayed on a web page. Other optimization techniques are discussed for web-based applications that display highly dynamic data.

  8. How NASA KSC Controls Interfaces with the use of Motion Skeletons and Product Structure

    NASA Technical Reports Server (NTRS)

    Jones, Corey

    2013-01-01

    This presentation will show how NASA KSC controls interfaces for Modular Product Architecture (MPA) using Locator Skeletons, Interface Skeletons, and Product Structure, combined within a Motion Skeleton. The user will learn how to utilize skeleton models to communicate interface data, as successfully done at NASA KSC through the use of Motion Skeletons to control interfaces for multi-launch systems. There will be discussion of the methodology used to control design requirements through WTParts, and of how to utilize product structure for non-CAD documents.

  9. My thoughts through a robot's eyes: an augmented reality-brain-machine interface.

    PubMed

    Kansaku, Kenji; Hata, Naoki; Takano, Kouji

    2010-02-01

    A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.

  10. A distributed, graphical user interface based, computer control system for atomic physics experiments

    NASA Astrophysics Data System (ADS)

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.

  11. A distributed, graphical user interface based, computer control system for atomic physics experiments.

    PubMed

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.

  12. Spying on Search Strategies

    ERIC Educational Resources Information Center

    Tenopir, Carol

    2004-01-01

    Only the most dedicated super-searchers are motivated to learn and control command systems, like DialogClassic, that rely on the user to input complex search strategies. Infrequent searchers and most end users choose interfaces that do some of the work for them and make the search process appear easy. However, the easier a good interface seems to…

  13. Designing the OPAC User Interface to Improve Access and Retrieval.

    ERIC Educational Resources Information Center

    Basista, Thomas; And Others

    1991-01-01

    Discussion of problems with retrieval of records in library online public access catalogs (OPACs) focuses on an ongoing research project at the Indiana University of Pennsylvania (IUP) that has been trying to improve subject retrieval vocabulary control using natural and thesaural language and on the design of a good graphical user interface.…

  14. International interface design for Space Station Freedom - Challenges and solutions

    NASA Technical Reports Server (NTRS)

    Mayo, Richard E.; Bolton, Gordon R.; Laurini, Daniele

    1988-01-01

    The definition of interfaces for the International Space Station is discussed, with a focus on negotiations between NASA and ESA. The program organization and division of responsibilities for the Space Station are outlined; the basic features of physical and functional interfaces are described; and particular attention is given to the interface management and documentation procedures, architectural control elements, interface implementation and verification, and examples of Columbus interface solutions (including mechanical, ECLSS, thermal-control, electrical, data-management, standardized user, and software interfaces). Diagrams, drawings, graphs, and tables listing interface types are provided.

  15. Representation-based user interfaces for the audiovisual library of the year 2000

    NASA Astrophysics Data System (ADS)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of an audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators of access to existing contents, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the document contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aiming to develop, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.

  16. Touchfree medical interfaces.

    PubMed

    Rossol, Nathaniel; Cheng, Irene; Rui Shen; Basu, Anup

    2014-01-01

    Real-time control of visual display systems via mid-air hand gestures offers many advantages over traditional interaction modalities. In medicine, for example, it allows a practitioner to adjust display values, e.g. contrast or zoom, on a medical visualization interface without the need to re-sterilize the interface. However, when users are holding a small tool (such as a pen, surgical needle, or computer stylus) the need to constantly put the tool down in order to make hand gesture interactions is not ideal. This work presents a novel interface that automatically adjusts for gesturing with hands and hand-held tools to precisely control medical displays. The novelty of our interface is that it uses a single set of gestures designed to be equally effective for fingers and hand-held tools without using markers. This type of interface was previously not feasible with low-resolution depth sensors such as Kinect, but is now achieved by using the recently released Leap Motion controller. Our interface is validated through a user study on a group of people given the task of adjusting parameters on a medical image.

  17. Intuitive wireless control of a robotic arm for people living with an upper body disability.

    PubMed

    Fall, C L; Turgeon, P; Campeau-Lecours, A; Maheu, V; Boukadoum, M; Roy, S; Massicotte, D; Gosselin, C; Gosselin, B

    2015-08-01

    Assistive Technologies (ATs), also called extrinsic enablers, are useful tools for people living with various disabilities. The key points when designing such devices concern not only their intended goal, but also the most suitable human-machine interface (HMI) to provide to users. This paper describes the design of a highly intuitive wireless controller for people living with upper body disabilities who retain residual or complete control of their neck and shoulders. Tested with JACO, a six-degree-of-freedom (6-DOF) assistive robotic arm with 3 flexible fingers on its end-effector, the system described in this article is made of low-cost commercial off-the-shelf components and allows a full emulation of JACO's standard controller, a 3-axis joystick with 7 user buttons. To do so, three nine-degree-of-freedom (9-DOF) inertial measurement units (IMUs) are connected to a microcontroller and help measure the user's head and shoulder positions, using a complementary filter approach. The results are then transmitted to a base station via a 2.4-GHz low-power wireless transceiver and interpreted by the control algorithm running on a PC host. A dedicated software interface allows the user to quickly calibrate the controller, and translates the information into suitable commands for JACO. The proposed controller is thoroughly described, from the electronic design to the implemented algorithms and user interfaces. Its performance and future improvements are discussed as well.
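
    For illustration, a minimal single-axis complementary filter of the kind mentioned above, fusing a drift-prone but smooth gyroscope path with a noisy but drift-free accelerometer tilt estimate; the weight, sample rate, and axis convention are assumptions, not the authors' filter.

    ```python
    import math

    def complementary_filter(angle, gyro_rate, accel, dt, alpha=0.98):
        """One update step of a complementary filter for a single tilt angle.

        angle     : previous angle estimate (rad)
        gyro_rate : angular rate from the gyroscope (rad/s)
        accel     : (ax, ay, az) accelerometer reading
        alpha     : weight of the integrated-gyro path versus the gravity path
        """
        ax, ay, az = accel
        accel_angle = math.atan2(ay, az)            # gravity-based tilt estimate
        return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

    # Example: 100 Hz updates with a nearly stationary sensor.
    angle = 0.0
    for _ in range(100):
        angle = complementary_filter(angle, gyro_rate=0.001, accel=(0.0, 0.1, 9.8), dt=0.01)
    print(angle)
    ```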

  18. A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management.

    PubMed

    Hocraffer, Amy; Nam, Chang S

    2017-01-01

    A meta-analysis was conducted to systematically evaluate the current state of research on human-system interfaces for users controlling semi-autonomous swarms composed of groups of drones or unmanned aerial vehicles (UAVs). UAV swarms pose several human factors challenges, such as high cognitive demands, non-intuitive behavior, and serious consequences for errors. This article presents findings from a meta-analysis of 27 UAV swarm management papers focused on the human-system interface and human factors concerns, providing an overview of the advantages, challenges, and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated. In general, allowing user- and mission-specific customization of user interfaces and raising the swarm's level of autonomy to reduce operator cognitive workload are beneficial and improve situation awareness (SA). It is clear that more research is needed in this rapidly evolving field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Secure Web-based Ground System User Interfaces over the Open Internet

    NASA Technical Reports Server (NTRS)

    Langston, James H.; Murray, Henry L.; Hunt, Gary R.

    1998-01-01

    A prototype has been developed which makes use of commercially available products in conjunction with the Java programming language to provide a secure user interface for command and control over the open Internet. This paper reports successful demonstration of: (1) Security over the Internet, including encryption and certification; (2) Integration of Java applets with a COTS command and control product; (3) Remote spacecraft commanding using the Internet. The Java-based Spacecraft Web Interface to Telemetry and Command Handling (Jswitch) ground system prototype provides these capabilities. This activity demonstrates the use and integration of current technologies to enable a spacecraft engineer or flight operator to monitor and control a spacecraft from a user interface communicating over the open Internet using standard World Wide Web (WWW) protocols and commercial off-the-shelf (COTS) products. The core command and control functions are provided by the COTS Epoch 2000 product. The standard WWW tools and browsers are used in conjunction with the Java programming technology. Security is provided with current encryption and certification technology. This system prototype is a step in the direction of giving scientists and flight operators Web-based access to instrument, payload, and spacecraft data.

  20. ANSO study: evaluation in an indoor environment of a mobile assistance robotic grasping arm.

    PubMed

    Coignard, P; Departe, J P; Remy Neris, O; Baillet, A; Bar, A; Drean, D; Verier, A; Leroux, C; Belletante, P; Le Guiet, J L

    2013-12-01

    To evaluate the reliability and functional acceptability of the "Synthetic Autonomous Majordomo" (SAM) robotic aid system (a mobile Neobotix base equipped with a semi-automatic vision interface and a Manus robotic arm). An open, multicentre, controlled study. We included 29 tetraplegic patients (23 patients with spinal cord injuries, 3 with locked-in syndrome and 4 with other disorders; mean ± SD age: 37.83 ± 13.3) and 34 control participants (mean ± SD age: 32.44 ± 11.2). The reliability of the user interface was evaluated in three multi-step scenarios: selection of the room in which the object to be retrieved was located (in the presence or absence of visual control by the user), selection of the object to be retrieved, the grasping of the object itself and the robot's return to the user with the object. A questionnaire was used to assess the robot's user acceptability. The SAM system was stable and reliable: both patients and control participants experienced few failures when completing the various stages of the scenarios. The graphic interface was effective for selecting and grasping the object, even in the absence of visual control. Users and carers were generally satisfied with SAM, although only a quarter of patients said that they would consider using the robot in their activities of daily living. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  1. Enhancing the Gaming Experience Using 3D Spatial User Interface Technologies.

    PubMed

    Kulshreshth, Arun; Pfeil, Kevin; LaViola, Joseph J

    2017-01-01

    Three-dimensional (3D) spatial user interface technologies have the potential to make games more immersive and engaging and thus provide a better user experience. Although technologies such as stereoscopic 3D display, head tracking, and gesture-based control are available for games, it is still unclear how their use affects gameplay and if there are any user performance benefits. The authors have conducted several experiments on these technologies in game environments to understand how they affect gameplay and how we can use them to optimize the gameplay experience.

  2. Human factors aspects of control room design

    NASA Technical Reports Server (NTRS)

    Jenkins, J. P.

    1983-01-01

    A plan for the design and analysis of a multistation control room is reviewed. It is found that acceptance of the computer-based information system by the users in the control room is mandatory for mission and system success. Criteria to improve the computer/user interface include: match of system input/output with the user; reliability, compatibility and maintainability; easy to learn with little training needed; a self-descriptive system; system under user control; transparent language, format and organization; correspondence to user expectations; adaptability to user experience level; fault tolerance; dialog capability; user communication needs reflected in flexibility, complexity, power and information load; an integrated system; and documentation.

  3. Design Guidelines and Criteria for User/Operator Transactions with Battlefield Automated Systems. Volume 5. Background Literature

    DTIC Science & Technology

    1981-02-01

    the machine. ARI's efforts in this area focus on human performance problems related to interactions with command and control centers, and on issues ... improvement of the user-machine interface. Lacking consistent design principles, current practice results in a fragmented and unsystematic approach to system ... complexity in the user-machine interface of BAS, ARI supported this effort for development of an online language for Army tactical intelligence

  4. Interaction design challenges and solutions for ALMA operations monitoring and control

    NASA Astrophysics Data System (ADS)

    Pietriga, Emmanuel; Cubaud, Pierre; Schwarz, Joseph; Primet, Romain; Schilling, Marcus; Barkats, Denis; Barrios, Emilio; Vila Vilaro, Baltasar

    2012-09-01

    The ALMA radio-telescope, currently under construction in northern Chile, is a very advanced instrument that presents numerous challenges. From a software perspective, one critical issue is the design of graphical user interfaces for operations monitoring and control that scale to the complexity of the system and to the massive amounts of data users are faced with. Early experience operating the telescope with only a few antennas has shown that conventional user interface technologies are not adequate in this context. They consume too much screen real-estate, require many unnecessary interactions to access relevant information, and fail to provide operators and astronomers with a clear mental map of the instrument. They increase extraneous cognitive load, impeding tasks that call for quick diagnosis and action. To address this challenge, the ALMA software division adopted a user-centered design approach. For the last two years, astronomers, operators, software engineers and human-computer interaction researchers have been involved in participatory design workshops, with the aim of designing better user interfaces based on state-of-the-art visualization techniques. This paper describes the process that led to the development of those interface components and to a proposal for the science and operations console setup: brainstorming sessions, rapid prototyping, joint implementation work involving software engineers and human-computer interaction researchers, feedback collection from a broader range of users, further iterations and testing.

  5. Evaluation of a wireless wearable tongue–computer interface by individuals with high-level spinal cord injuries

    PubMed Central

    Huo, Xueliang; Ghovanloo, Maysam

    2010-01-01

    The tongue drive system (TDS) is an unobtrusive, minimally invasive, wearable and wireless tongue–computer interface (TCI), which can infer its users' intentions, represented in their volitional tongue movements, by detecting the position of a small permanent magnetic tracer attached to the users' tongues. Any specific tongue movements can be translated into user-defined commands and used to access and control various devices in the users' environments. The latest external TDS (eTDS) prototype is built on a wireless headphone and interfaced to a laptop PC and a powered wheelchair. Using customized sensor signal processing algorithms and a graphical user interface, the eTDS performance was evaluated by 13 naive subjects with high-level spinal cord injuries (C2–C5) at the Shepherd Center in Atlanta, GA. Results of the human trial show that an average information transfer rate of 95 bits/min was achieved for computer access with 82% accuracy. This information transfer rate is about two times higher than that of EEG-based BCIs tested on human subjects. It was also demonstrated that the subjects had immediate and full control over the powered wheelchair to the extent that they were able to perform complex wheelchair navigation tasks, such as driving through an obstacle course. PMID:20332552

  6. Applications of cortical signals to neuroprosthetic control: a critical review.

    PubMed

    Lauer, R T; Peckham, P H; Kilgore, K L; Heetderks, W J

    2000-06-01

    Cortical signals might provide a potential means of interfacing with a neuroprosthesis. Guidelines regarding the necessary control features in terms of both performance characteristics and user requirements are presented, and their implications for the design of a first generation cortical control interface for a neuroprosthesis are discussed.

  7. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet insure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  8. Novel user interface design for medication reconciliation: an evaluation of Twinlist.

    PubMed

    Plaisant, Catherine; Wu, Johnny; Hettinger, A Zach; Powsner, Seth; Shneiderman, Ben

    2015-03-01

    The primary objective was to evaluate time, number of interface actions, and accuracy on medication reconciliation tasks using a novel user interface (Twinlist, which lays out the medications in five columns based on similarity and uses animation to introduce the grouping - www.cs.umd.edu/hcil/sharp/twinlist) compared to a Control interface (where medications are presented side by side in two columns). A secondary objective was to assess participant agreement with statements regarding clarity and utility and to elicit comparisons. A 1 × 2 within-subjects experimental design was used with interface (Twinlist or Control) as an independent variable; time, number of clicks, scrolls, and errors were used as dependent variables. Participants were practicing medical providers with experience performing medication reconciliation but no experience with Twinlist. They reconciled two cases in each interface (in a counterbalanced order), then provided feedback on the design of the interface. Twenty medical providers participated in the study for a total of 80 trials. The trials using Twinlist were statistically significantly faster (18%), with fewer clicks (40%) and scrolls (60%). Serious errors were noted 12 and 31 times in Twinlist and Control trials, respectively. Trials using Twinlist were faster and more accurate. Subjectively, participants rated Twinlist more favorably than Control. They valued the novel layout of the drugs, but indicated that the included animation would be valuable for novices, but not necessarily for advanced users. Additional feedback from participants provides guidance for further development and clinical implementations. Cognitive support of medication reconciliation through interface design can significantly improve performance and safety. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Human-computer interface

    DOEpatents

    Anderson, Thomas G.

    2004-12-21

    The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.

  10. Development and functional demonstration of a wireless intraoral inductive tongue computer interface for severely disabled persons.

    PubMed

    N S Andreasen Struijk, Lotte; Lontis, Eugen R; Gaihede, Michael; Caltenco, Hector A; Lund, Morten Enemark; Schioeler, Henrik; Bentsen, Bo

    2017-08-01

    Individuals with tetraplegia depend on alternative interfaces in order to control computers and other electronic equipment. Current interfaces are often limited in the number of available control commands, and may compromise the social identity of an individual due to their undesirable appearance. The purpose of this study was to implement an alternative computer interface, which was fully embedded into the oral cavity and which provided multiple control commands. The development of a wireless, intraoral, inductive tongue computer interface is described. The interface encompassed a 10-key keypad area and a mouse pad area. The system was embedded wirelessly into the oral cavity of the user. The functionality of the system was demonstrated in two tetraplegic individuals and two able-bodied individuals. Results: The system was invisible during use and allowed the user to type on a computer using either the keypad area or the mouse pad. The maximal typing rate was 1.8 s for repetitively typing a correct character with the keypad area and 1.4 s for repetitively typing a correct character with the mouse pad area. The results suggest that this inductive tongue computer interface provides an esthetically acceptable and functionally efficient environmental control for a severely disabled user. Implications for Rehabilitation: New design, implementation and detection methods for intraoral assistive devices. Demonstration of wireless powering and encapsulation techniques suitable for intraoral embedment of assistive devices. Demonstration of the functionality of a rechargeable and fully embedded intraoral tongue-controlled computer input device.

  11. Bringing Control System User Interfaces to the Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xihui; Kasemir, Kay

    With the evolution of web-based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and the web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.
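
    For illustration, a minimal sketch of the general pattern of subscribing to control system data over a WebSocket; the endpoint URL, the JSON message schema, and the process variable name are hypothetical placeholders and do not represent the actual WebPDA protocol.

    ```python
    import asyncio
    import json
    import websockets  # third-party "websockets" package

    async def monitor(url="ws://localhost:8080/pda", pv="sim://sine"):
        """Open a WebSocket, send a subscribe request, and print value updates."""
        async with websockets.connect(url) as ws:
            # Hypothetical subscribe message; a real protocol defines its own schema.
            await ws.send(json.dumps({"type": "subscribe", "pv": pv}))
            while True:
                update = json.loads(await ws.recv())
                print(update)   # e.g. {"pv": "sim://sine", "value": 3.2, "time": ...}

    if __name__ == "__main__":
        asyncio.run(monitor())
    ```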

  12. The Virtual Tablet: Virtual Reality as a Control System

    NASA Technical Reports Server (NTRS)

    Chronister, Andrew

    2016-01-01

    In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.

  13. Auto-steering apparatus and method

    DOEpatents

    McKay, Mark D.; Anderson, Matthew O.

    2007-03-13

    A vehicular guidance method involves providing a user interface through which data can be input to establish a contour for a vehicle to follow, the user interface further configured to receive information from a differential global positioning system (DGPS); determining cross-track and offset data using information received from the DGPS; generating control values using at least vehicular kinematics, the cross-track data, and the offset data; and providing an output to control steering of the vehicle, using the control values, in a direction that follows the established contour while attempting to minimize the cross-track and offset data.
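
    For illustration, a minimal sketch of one common way to turn cross-track and heading errors into a steering command (a Stanley-style law); the gains, limits, and sign conventions are assumptions and this is not the patented guidance method.

    ```python
    import math

    def steering_command(cross_track, heading_error, speed,
                         k_gain=0.8, max_steer=math.radians(30)):
        """Steering angle from cross-track and heading errors.

        cross_track   : signed lateral offset from the desired contour (m)
        heading_error : vehicle heading minus contour heading (rad)
        speed         : forward speed (m/s)
        """
        steer = heading_error + math.atan2(k_gain * cross_track, max(speed, 0.1))
        return max(-max_steer, min(max_steer, steer))    # clamp to actuator limits

    # Example: 0.5 m off the contour, heading already parallel, driving at 2 m/s.
    print(math.degrees(steering_command(cross_track=-0.5, heading_error=0.0, speed=2.0)))
    ```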

  14. Haptic-STM: a human-in-the-loop interface to a scanning tunneling microscope.

    PubMed

    Perdigão, Luís M A; Saywell, Alex

    2011-07-01

    The operation of a haptic device interfaced with a scanning tunneling microscope (STM) is presented here. The user moves the STM tip in three dimensions by means of a stylus attached to the haptic instrument. The tunneling current measured by the STM is converted to a vertical force, applied to the stylus and felt by the user, with the user being incorporated into the feedback loop that controls the tip-surface distance. A haptic-STM interface of this nature allows the user to feel atomic features on the surface and facilitates the tactile manipulation of the adsorbate/substrate system. The operation of this device is demonstrated via room-temperature STM imaging of C60 molecules adsorbed on an Au(111) surface in ultra-high vacuum.
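
    For illustration, a minimal sketch of one plausible current-to-force mapping for such an interface: because the tunneling current depends roughly exponentially on tip-sample distance, a logarithmic mapping gives an approximately distance-linear force cue. The setpoint, gain, and clamp are assumptions, not necessarily the mapping used in the paper.

    ```python
    import math

    def haptic_force(current, setpoint=1.0e-9, gain=0.5, max_force=3.0):
        """Map a tunneling current (A) to a vertical force (N) on the haptic stylus."""
        force = gain * math.log(current / setpoint)      # positive when tip is too close
        return max(-max_force, min(max_force, force))    # clamp to the device's force range

    # Example: currents above the setpoint push the stylus back up, below it pull down.
    for i in (0.1e-9, 1.0e-9, 10e-9):
        print(f"{i:.1e} A -> {haptic_force(i):+.2f} N")
    ```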

  15. Neuroprosthetic Decoder Training as Imitation Learning.

    PubMed

    Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P

    2016-05-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
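
    For illustration, a toy sketch of the dataset-aggregation (DAgger-style) idea described above for a simulated cursor task: states are visited under the current decoder but labelled with a surrogate "oracle" intention, and the decoder is refit on the aggregated data; the simulated encoding, oracle policy, and least-squares refit are assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_neurons, n_dims = 40, 2
    enc = rng.standard_normal((n_dims, n_neurons))   # simulated neural encoding (assumption)

    def neural_activity(intended_vel):
        return intended_vel @ enc + 0.3 * rng.standard_normal(n_neurons)

    def oracle_velocity(cursor, target, gain=0.5):
        """Surrogate for the user's intent: head straight toward the target."""
        return gain * (target - cursor)

    W = rng.standard_normal((n_neurons, n_dims))     # initial decoder
    data_Y, data_V = [], []

    for iteration in range(10):                      # aggregation rounds
        cursor, target = np.zeros(n_dims), rng.uniform(-5, 5, n_dims)
        for step in range(50):
            v_oracle = oracle_velocity(cursor, target)
            y = neural_activity(v_oracle)            # activity evoked by the intended movement
            data_Y.append(y)                         # states visited under the current decoder...
            data_V.append(v_oracle)                  # ...labelled with the oracle's action
            cursor = cursor + y @ W                  # roll the *current* decoder forward
        # Refit the decoder on the aggregated dataset (simple least squares).
        W, *_ = np.linalg.lstsq(np.asarray(data_Y), np.asarray(data_V), rcond=None)
    ```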

  16. The Cortex project A quasi-real-time information system to build control systems for high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Barillere, R.; Cabel, H.; Chan, B.; Goulas, I.; Le Goff, J. M.; Vinot, L.; Willmott, C.; Milcent, H.; Huuskonen, P.

    1994-12-01

    The Cortex control information system framework is being developed at CERN. It offers basic functions to allow the sharing of information, control and analysis functions; it presents a uniform human interface for such information and functions; it permits upgrades and additions without code modification and it is sufficiently generic to allow its use by most of the existing or future control systems at CERN. Services will include standard interfaces to user-supplied functions, analysis, archive and event management. Cortex does not attempt to carry out the direct data acquisition or control of the devices; these are activities which are highly specific to the application and are best done by commercial systems or user-written programs. Instead, Cortex integrates these application-specific pieces and supports them by supplying other commonly needed facilities such as collaboration, analysis, diagnosis and user assistance.

  17. Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm

    PubMed Central

    Höhne, Johannes; Tangermann, Michael

    2014-01-01

    Realizing the decoding of brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems are challenged with ERP responses, which are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978

  18. Adjustably Autonomous Multi-agent Plan Execution with an Internal Spacecraft Free-Flying Robot Prototype

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Nicewarner, Keith

    2006-01-01

    We present a multi-agent model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as to control these agents at various levels of autonomy. This interface also permits the agents to query the user or request the user to perform tasks to help achieve the commanded goals. We conclude by describing a scenario where these two agents and a human interact to cooperatively detect, diagnose, and recover from a simulated spacecraft fault.

  19. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimate of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  20. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system is evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877

  1. Wearable wireless User Interface Cursor-Controller (UIC-C).

    PubMed

    Marjanovic, Nicholas; Kerr, Kevin; Aranda, Ricardo; Hickey, Richard; Esmailbeigi, Hananeh

    2017-07-01

    Controlling a computer or a smartphone's cursor allows the user to access a world full of information. For millions of people with limited upper-extremity motor function, controlling the cursor becomes profoundly difficult. Our team has developed the User Interface Cursor-Controller (UIC-C) to assist impaired individuals in regaining control over the cursor. The UIC-C is a hands-free device that utilizes the tongue muscle to control the cursor movements. The entire device is housed inside a subject-specific retainer. The user maneuvers the cursor by manipulating a joystick embedded inside the retainer with the tongue. The joystick movement commands are sent to an electronic device via a Bluetooth connection. The device is readily recognizable as a cursor controller by any Bluetooth-enabled electronic device. The device testing results have shown that the time it takes the user to control the cursor accurately via the UIC-C is about three times longer than with a standard computer mouse controlled by hand. The device does not require any permanent modifications to the body; therefore, it could be used during the period of acute rehabilitation of the hands. With the development of modern smart homes and computer-controlled electronics, the UIC-C could be integrated into a system that enables individuals with permanent impairment to control the cursor. In conclusion, the UIC-C device is designed with the goal of allowing the user to accurately control a cursor during periods of either acute or permanent upper-extremity impairment.

  2. Investigation on sense of control parameters for joystick interface in remote operated container crane application

    NASA Astrophysics Data System (ADS)

    Abdullah, U. N. N.; Handroos, H.

    2017-09-01

    Introduction: This paper presents a study of sense-of-control parameters to address the lack of direct motion feeling in the remote operated container crane station (ROCCS) joystick interface. The investigation of these parameters is important for developing the engineering parameters related to the sense-of-control goal in the next design process. Methodology: Structured interviews and observations were conducted to obtain user experience data from thirteen remote container crane operators at two international terminals. Then, interview analysis, task analysis, activity analysis and timeline analysis were conducted to compare and contrast the results from the interviews and observations. Results: Four experience parameters were identified to support the sense-of-control goal in the later design improvement of the ROCCS joystick interface. The significance of difficulty of control, unsynchronized movements, facilitation of control, and decision making in unexpected situations as parameters of the sense-of-control goal was validated by feedback from operators as well as by the analyses. Contribution: This study provides feedback directly from end users towards developing a sustainable control interface for the ROCCS in particular and remote operated off-road vehicles in general.

  3. Research the mobile phone operation interfaces for vision-impairment.

    PubMed

    Yao, Yen-Ting; Leung, Cherng-Yee

    2012-01-01

    Vision-impaired users commonly have difficulty operating mobile-phone functions and adapting to any manufacturer's user interface design. The goal of this research is therefore to evaluate how the operating convenience and user interfaces of mobile phones and other electronic appliances currently on the market can be improved for these users. From 30 valid questionnaires collected from 30 vision-impaired respondents, the research draws the following conclusions: (1) mobile phone manufacturers are generally unaware of the difficulties vision-impaired users have in operating mobile phone user interfaces; (2) vision-impaired users prefer audio alert signals; (3) vision-impaired users are unable to purchase a mobile phone independently without assistance from others; and (4) vision-impaired users favor the addition of touch-based interface designs, while functions such as braille, enlarged keys, and multi-function control panels are requested least. By exploring the improvements vision-impaired users need and the obstacles they face in mobile phone interface operation, this research aims to offer a reference that can be applied to electronic appliance design. Hopefully, the analysis results of this research can be used as reference data for designing electronic and high-tech products and for improving usability for the vision-impaired.

  4. Skills based evaluation of alternative input methods to command a semi-autonomous electric wheelchair.

    PubMed

    Rojas, Mario; Ponce, Pedro; Molina, Arturo

    2016-08-01

    This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head movements and speech recognition controls, was designed to facilitate mobility skills for severely disabled people. Thirteen tasks, which are common to all wheelchair users, were attempted five times by controlling the wheelchair with the virtual joystick and the hands-free interfaces in different areas for disabled and non-disabled people. Even though the prototype has an intelligent navigation control, based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scored values showed that both controls, the head movements and the virtual joystick, have similar capabilities (92.3% and 100%, respectively). However, the 54.6% capacity score obtained for the speech control interface indicates the need for navigation assistance to accomplish some of the goals. Furthermore, the evaluation time indicates which skills require more user training with the interface, as well as specifications to improve the overall performance of the wheelchair.

  5. Transportable telemetry workstation

    NASA Technical Reports Server (NTRS)

    Collins, Aaron S.

    1989-01-01

    The goal was to complete the design of a prototype for a Transportable Telemetry Workstation (TTW). The Macintosh 2 is used to provide a low-cost system which can house real-time cards mounted on the NuBus inside the Macintosh 2 plus provide a standardized user interface on the Macintosh 2 console. Prior to a telemetry run, the user will be able to configure his real-time telemetry processing functions from the Macintosh 2 console. During a telemetry run, the real-time cards will store the telemetry data directly on a hard disk while permitting viewing of the data on the Macintosh 2 console in various selectable formats. The user will view the cards in terms of the functions they perform and the selectable paths through the cards; the user is not required to become involved directly in hardware issues except in terms of the functional configuration of the system components. The TTW will accept telemetry data from an RS422 serial input data bus, pass it through a frame synchronizer card and on to a real-time controller card via a telemetry backplane bus. The controller card will then route the data to a hard disk through a SCSI interface, and/or to a user interface on the Macintosh 2 console by way of the Macintosh 2 NuBus. The three major components to be designed, therefore, are the TTW Controller Card, the TTW Synchronizer Card, and the NuBus/Macintosh 2 User Interface. Design and prototyping of this state-of-the-art, transportable, low-cost, easy-to-use multiprocessor telemetry system is continuing. Other functions are planned for the future.

  6. Transportable telemetry workstation

    NASA Astrophysics Data System (ADS)

    Collins, Aaron S.

    1989-09-01

    The goal was to complete the design of a prototype for a Transportable Telemetry Workstation (TTW). The Macintosh 2 is used to provide a low-cost system which can house real-time cards mounted on the NuBus inside the Macintosh 2 plus provide a standardized user interface on the Macintosh 2 console. Prior to a telemetry run, the user will be able to configure his real-time telemetry processing functions from the Macintosh 2 console. During a telemetry run, the real-time cards will store the telemetry data directly on a hard disk while permitting viewing of the data on the Macintosh 2 console in various selectable formats. The user will view the cards in terms of the functions they perform and the selectable paths through the cards; the user is not required to become involved directly in hardware issues except in terms of the functional configuration of the system components. The TTW will accept telemetry data from an RS422 serial input data bus, pass it through a frame synchronizer card and on to a real-time controller card via a telemetry backplane bus. The controller card will then route the data to a hard disk through a SCSI interface, and/or to a user interface on the Macintosh 2 console by way of the Macintosh 2 NuBus. The three major components to be designed, therefore, are the TTW Controller Card, the TTW Synchronizer Card, and the NuBus/Macintosh 2 User Interface. Design and prototyping of this state-of-the-art, transportable, low-cost, easy-to-use multiprocessor telemetry system is continuing. Other functions are planned for the future.

  7. Usability Issues in the User Interfaces of Privacy-Enhancing Technologies

    ERIC Educational Resources Information Center

    LaTouche, Lerone W.

    2013-01-01

    Privacy on the Internet has become one of the leading concerns for Internet users. These users are not wrong in their concerns if personally identifiable information is not protected and under their control. To minimize the collection of Internet users' personal information and help solve the problem of online privacy, a number of…

  8. A Graphical User-Interface for Propulsion System Analysis

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Ryall, Kathleen

    1992-01-01

    NASA LeRC uses a series of computer codes to calculate installed propulsion system performance and weight. The need to evaluate more advanced engine concepts with a greater degree of accuracy has resulted in an increase in complexity of this analysis system. Therefore, a graphical user interface was developed to allow the analyst to more quickly and easily apply these codes. The development of this interface and the rationale for the approach taken are described. The interface consists of a method of pictorially representing and editing the propulsion system configuration, forms for entering numerical data, on-line help and documentation, post processing of data, and a menu system to control execution.

  9. A graphical user-interface for propulsion system analysis

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Ryall, Kathleen

    1993-01-01

    NASA LeRC uses a series of computer codes to calculate installed propulsion system performance and weight. The need to evaluate more advanced engine concepts with a greater degree of accuracy has resulted in an increase in complexity of this analysis system. Therefore, a graphical user interface was developed to allow the analyst to more quickly and easily apply these codes. The development of this interface and the rationale for the approach taken are described. The interface consists of a method of pictorially representing and editing the propulsion system configuration, forms for entering numerical data, on-line help and documentation, post processing of data, and a menu system to control execution.

  10. Is There a Chance for a Standardised User Interface?

    ERIC Educational Resources Information Center

    Fletcher, Liz

    1993-01-01

    Issues concerning the implementation of standard user interfaces for CD-ROMs are discussed, including differing perceptions of the ideal interface, graphical user interfaces, user needs, and the standard protocols. It is suggested users should be able to select from a variety of user interfaces on each CD-ROM. (EA)

  11. Topological Galleries: A High Level User Interface for Topology Controlled Volume Rendering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacCarthy, Brian; Carr, Hamish; Weber, Gunther H.

    2011-06-30

    Existing topological interfaces to volume rendering are limited by their reliance on sophisticated knowledge of topology by the user. We extend previous work by describing topological galleries, an interface for novice users that is based on the design galleries approach. We report three contributions: an interface based on hierarchical thumbnail galleries to display the containment relationships between topologically identifiable features, the use of the pruning hierarchy instead of branch decomposition for contour tree simplification, and drag-and-drop transfer function assignment for individual components. Initial results suggest that this approach suffers from limitations due to rapid drop-off of feature size in the pruning hierarchy. We explore these limitations by providing statistics of feature size as a function of depth in the pruning hierarchy of the contour tree.

  12. Functional description of a command and control language tutor

    NASA Technical Reports Server (NTRS)

    Elke, David R.; Seamster, Thomas L.; Truszkowski, Walter

    1990-01-01

    The status of an ongoing project to explore the application of Intelligent Tutoring System (ITS) technology to NASA command and control languages is described. The primary objective of the current phase of the project is to develop a user interface for an ITS to assist NASA control center personnel in learning Systems Test and Operations Language (STOL). Although this ITS will be developed for Gamma Ray Observatory operators, it will be designed with sufficient flexibility so that its modules may serve as an ITS for other control languages such as the User Interface Language (UIL). The focus of this phase is to develop at least one other form of STOL representation to complement the operational STOL interface. Such an alternative representation would be adaptively employed during the tutoring session to facilitate the learning process. This is a key feature of this ITS which distinguishes it from a simulator that is only capable of representing the operational environment.

  13. Development of a User Interface for a Regression Analysis Software Tool

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    An easy-to-use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface's design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and graphical user interface are discussed in order to illustrate the user interface's overall design approach.

  14. Comparing Text-based and Graphic User Interfaces for Novice and Expert Users

    PubMed Central

    Chen, Jung-Wei; Zhang, Jiajie

    2007-01-01

    Graphic User Interface (GUI) is commonly considered to be superior to Text-based User Interface (TUI). This study compares GUI and TUI in an electronic dental record system. Several usability analysis techniques compared the relative effectiveness of a GUI and a TUI. Expert users and novice users were evaluated in time required and steps needed to complete the task. A within-subject design was used to evaluate whether experience with either interface affects task performance. The results show that the GUI was not better than the TUI for expert users. The GUI was better for novice users. For novice users there was a learning transfer effect from TUI to GUI. This means that whether a user interface is user-friendly depends on the mapping between the user interface and tasks. GUI by itself may or may not be better than TUI. PMID:18693811

  15. Comparing Text-based and Graphic User Interfaces for novice and expert users.

    PubMed

    Chen, Jung-Wei; Zhang, Jiajie

    2007-10-11

    Graphic User Interface (GUI) is commonly considered to be superior to Text-based User Interface (TUI). This study compares GUI and TUI in an electronic dental record system. Several usability analysis techniques compared the relative effectiveness of a GUI and a TUI. Expert users and novice users were evaluated in time required and steps needed to complete the task. A within-subject design was used to evaluate whether experience with either interface affects task performance. The results show that the GUI was not better than the TUI for expert users. The GUI was better for novice users. For novice users there was a learning transfer effect from TUI to GUI. This means that whether a user interface is user-friendly depends on the mapping between the user interface and tasks. GUI by itself may or may not be better than TUI.

  16. A self-paced motor imagery based brain-computer interface for robotic wheelchair control.

    PubMed

    Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng

    2011-10-01

    This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs the path planning and fully controls the wheelchair except for the automatic obstacle avoidance based on a laser range finder when necessary. In order for the users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed the users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that controlling the simulated robot. Our emphasis is on allowing more potential users to use the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario, and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.

  17. How much control is enough? Influence of unreliable input on user experience.

    PubMed

    van de Laar, Bram; Plass-Oude Bos, Danny; Reuderink, Boris; Poel, Mannes; Nijholt, Anton

    2013-12-01

    Brain–computer interfaces (BCI) provide a valuable new input modality within human–computer interaction systems. However, like other body-based inputs such as gesture or gaze based systems, the system recognition of input commands is still far from perfect. This raises important questions, such as what level of control should such an interface be able to provide. What is the relationship between actual and perceived control? And in the case of applications for entertainment in which fun is an important part of user experience, should we even aim for the highest level of control, or is the optimum elsewhere? In this paper, we evaluate whether we can modulate the amount of control and if a game can be fun with less than perfect control. In the experiment users (n = 158) played a simple game in which a hamster has to be guided to the exit of a maze. The amount of control the user has over the hamster is varied. The variation of control through confusion matrices makes it possible to simulate the experience of using a BCI, while using the traditional keyboard for input. After each session the user completed a short questionnaire on user experience and perceived control. Analysis of the data showed that the perceived control of the user could largely be explained by the amount of control in the respective session. As expected, user frustration decreases with increasing control. Moreover, the results indicate that the relation between fun and control is not linear. Although at lower levels of control fun does increase with improved control, the level of fun drops just before perfect control is reached (with an optimum around 96%). This poses new insights for developers of games who want to incorporate some form of BCI or other modality with unreliable input in their game: for creating a fun game, unreliable input can be used to create a challenge for the user.
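
    As a rough illustration of the confusion-matrix mechanism described above, the sketch below degrades keyboard commands by sampling from a per-command confusion matrix; the command names, matrix values, and function names are illustrative assumptions, not the authors' implementation.

      import random

      # Illustrative confusion matrix: each row lists the probabilities that an
      # intended command is delivered as each possible command. Rows sum to 1.
      COMMANDS = ["left", "right", "up", "down"]
      CONFUSION = {
          "left":  [0.96, 0.02, 0.01, 0.01],
          "right": [0.02, 0.96, 0.01, 0.01],
          "up":    [0.01, 0.01, 0.96, 0.02],
          "down":  [0.01, 0.01, 0.02, 0.96],
      }

      def degrade(intended):
          """Return the command actually issued to the game, simulating
          BCI-like recognition errors on top of reliable keyboard input."""
          return random.choices(COMMANDS, weights=CONFUSION[intended], k=1)[0]

      if __name__ == "__main__":
          for key in ["left", "left", "up", "down"]:
              print(key, "->", degrade(key))

    Lowering the diagonal entries lowers the level of control, which is how an experiment of this kind can vary perceived control while keeping the keyboard as the physical input device.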

  18. Foreign Language Teaching and the Computer.

    ERIC Educational Resources Information Center

    Garrett, Nina; Hart, Robert S.

    1988-01-01

    A review of the APPLE MACINTOSH-compatible software "Conjugate! Spanish," intended to drill Spanish verb forms, points out its strengths (error feedback, user manual, user interface, and feature control) and its weaknesses (pedagogical approach). (CB)

  19. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  20. CE-SAM: a conversational interface for ISR mission support

    NASA Astrophysics Data System (ADS)

    Pizzocaro, Diego; Parizas, Christos; Preece, Alun; Braines, Dave; Mott, David; Bakdash, Jonathan Z.

    2013-05-01

    There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interactions we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly-increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex, because assets may be "owned" by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE) - a form of controlled natural language that is both human-readable and machine processable (i.e. can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface using natural language, which the system converts to CE for feeding-back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.

  1. XAL Application Framework and Bricks GUI Builder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelaia II, Tom

    2007-01-01

    The XAL [1] Application Framework is a framework for rapidly developing document based Java applications with a common look and feel along with many built-in user interface behaviors. The Bricks GUI builder consists of a modern application and framework for rapidly building user interfaces in support of true Model-View-Controller (MVC) compliant Java applications. Bricks and the XAL Application Framework allow developers to rapidly create quality applications.

  2. Autonomous Robot Control via Autonomy Levels (ARCAL)

    DTIC Science & Technology

    2015-08-21

    same simulated objects. VRF includes a detailed graphical user interface (GUI) front end that subscribes to objects over HLA and renders them. [Remainder of the indexed excerpt: a truncated citation and block-diagram labels (scout vehicle commands, sensor measurements, mission vehicle, mission goals, operator interface, scout belief update, logistics).]

  3. Autonomous Robot Control via Autonomy Levels (ARCAL)

    DTIC Science & Technology

    2015-06-25

    simulated objects. VRF includes a detailed graphical user interface (GUI) front end that subscribes to objects over HLA and renders them. [Remainder of the indexed excerpt: a truncated citation and block-diagram labels (scout vehicle commands, sensor measurements, mission vehicle, mission goals, operator interface, scout belief update, logistics executive).]

  4. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.
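
    The binding of user-defined commands to user-written device-driving routines can be pictured as a simple dispatch table; Python stands in here for the FORTRAN/Pascal tooling described above, and all command names and routines are hypothetical.

      # Hypothetical user-written device-driving routines.
      def open_valve():
          print("valve opened")

      def start_pump():
          print("pump started")

      # User-defined command table: natural-language command -> device routine.
      # In the system described above this binding is generated as FORTRAN
      # subroutines from user-supplied input data.
      COMMAND_TABLE = {
          "open the valve": open_valve,
          "start the pump": start_pump,
      }

      def execute(command):
          action = COMMAND_TABLE.get(command.strip().lower())
          if action is None:
              print("unknown command:", command)
          else:
              action()

      execute("Open the valve")
      execute("start the pump")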

  5. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  6. Asynchronous P300-based brain-computer interface to control a virtual environment: initial tests on end users.

    PubMed

    Aloise, Fabio; Schettini, Francesca; Aricò, Pietro; Salinari, Serenella; Guger, Christoph; Rinsma, Johanna; Aiello, Marco; Mattia, Donatella; Cincotti, Febo

    2011-10-01

    Motor disability and/or ageing can prevent individuals from fully enjoying home facilities, thus worsening their quality of life. Advances in the field of accessible user interfaces for domotic appliances can represent a valuable way to improve the independence of these persons. An asynchronous P300-based Brain-Computer Interface (BCI) system was recently validated with the participation of healthy young volunteers for environmental control. In this study, the asynchronous P300-based BCI for the interaction with a virtual home environment was tested with the participation of potential end-users (clients of a Frisian home care organization) with limited autonomy due to ageing and/or motor disabilities. System testing revealed that the minimum number of stimulation sequences needed to achieve correct classification had a higher intra-subject variability in potential end-users with respect to what was previously observed in young controls. Here we show that the asynchronous modality performed significantly better as compared to the synchronous mode in continuously adapting its speed to the users' state. Furthermore, the asynchronous system modality confirmed its reliability in avoiding misclassifications and false positives, as previously shown in young healthy subjects. The asynchronous modality may contribute to filling the usability gap between BCI systems and traditional input devices, representing an important step towards their use in the activities of daily living.

  7. Brain-computer interface technology: a review of the Second International Meeting.

    PubMed

    Vaughan, Theresa M; Heetderks, William J; Trejo, Leonard J; Rymer, William Z; Weinrich, Michael; Moore, Melody M; Kübler, Andrea; Dobkin, Bruce H; Birbaumer, Niels; Donchin, Emanuel; Wolpaw, Elizabeth Winter; Wolpaw, Jonathan R

    2003-06-01

    This paper summarizes the Brain-Computer Interfaces for Communication and Control, The Second International Meeting, held in Rensselaerville, NY, in June 2002. Sponsored by the National Institutes of Health and organized by the Wadsworth Center of the New York State Department of Health, the meeting addressed current work and future plans in brain-computer interface (BCI) research. Ninety-two researchers representing 38 different research groups from the United States, Canada, Europe, and China participated. The BCIs discussed at the meeting use electroencephalographic activity recorded from the scalp or single-neuron activity recorded within cortex to control cursor movement, select letters or icons, or operate neuroprostheses. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI that recognizes the commands contained in the input and expresses them in device control. Current BCIs have maximum information transfer rates of up to 25 b/min. Achievement of greater speed and accuracy requires improvements in signal acquisition and processing, in translation algorithms, and in user training. These improvements depend on interdisciplinary cooperation among neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective criteria for evaluating alternative methods. The practical use of BCI technology will be determined by the development of appropriate applications and identification of appropriate user groups, and will require careful attention to the needs and desires of individual users.
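
    The information transfer rates quoted above are conventionally computed with the Wolpaw formula; the sketch below assumes that formula, with the number of classes, accuracy, and selection rate chosen purely for illustration.

      import math

      def wolpaw_itr(n_classes, accuracy, selections_per_min):
          """Bits per minute under the standard Wolpaw information-transfer-rate
          formula; assumes 0 < accuracy <= 1."""
          n, p = n_classes, accuracy
          if p >= 1.0:
              bits = math.log2(n)
          else:
              bits = (math.log2(n) + p * math.log2(p)
                      + (1 - p) * math.log2((1 - p) / (n - 1)))
          return bits * selections_per_min

      # Example: a 4-class BCI at 90% accuracy making 20 selections per minute
      # yields roughly 27 bits/min, the same order as the rates cited above.
      print(round(wolpaw_itr(4, 0.90, 20), 1))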

  8. Designing User-Computer Dialogues: Basic Principles and Guidelines.

    ERIC Educational Resources Information Center

    Harrell, Thomas H.

    This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…

  9. Upper Body-Based Power Wheelchair Control Interface for Individuals with Tetraplegia

    PubMed Central

    Thorp, Elias B.; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica; Pierella, Camilla; Roth, Elliot J.; Gonzalez, Ismael Seanez; Mussa-Ivaldi, Ferdinando A.

    2016-01-01

    Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users with a limited vocabulary of commands and often do not take advantage of the user’s residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control the power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five sessions of driving, were able to achieve smoothness that was similar to the smoothness with their current joystick. All participants were twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize training in controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control. PMID:26054071
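
    A common way to realize this kind of redundant body-machine interface is to project the high-dimensional shoulder signals onto a low-dimensional control space learned during calibration; the sketch below assumes a plain PCA-style linear map and synthetic data, not the authors' decoder.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic calibration data: 500 samples of 8 shoulder sensor channels
      # recorded while the user freely explores comfortable movements.
      calibration = rng.normal(size=(500, 8))

      # Fit a 2-D linear map (PCA via SVD on the centered data).
      mean = calibration.mean(axis=0)
      _, _, vt = np.linalg.svd(calibration - mean, full_matrices=False)
      projection = vt[:2].T          # 8 x 2: sensor space -> (forward, turn)

      def to_command(sample, gain=0.5):
          """Map one 8-channel sample to proportional (forward, turn) commands."""
          forward, turn = gain * ((sample - mean) @ projection)
          return float(forward), float(turn)

      print(to_command(calibration[0]))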

  10. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    PubMed

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential of improving the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Improving 3D Character Posing with a Gestural Interface.

    PubMed

    Kyto, Mikko; Dhinakaran, Krupakar; Martikainen, Aki; Hamalainen, Perttu

    2017-01-01

    The most time-consuming part of character animation is 3D character posing. Posing using a mouse is a slow and tedious task that involves sequences of selecting on-screen control handles and manipulating the handles to adjust character parameters, such as joint rotations and end effector positions. Thus, various 3D user interfaces have been proposed to make animating easier, but they typically provide less accuracy. The proposed interface combines a mouse with the Leap Motion device to provide 3D input. A usability study showed that users preferred the Leap Motion over a mouse as a 3D gestural input device. The Leap Motion drastically decreased the number of required operations and the task completion time, especially for novice users.

  12. bioWidgets: data interaction components for genomics.

    PubMed

    Fischer, S; Crabtree, J; Brunk, B; Gibson, M; Overton, G C

    1999-10-01

    The presentation of genomics data in a perspicuous visual format is critical for its rapid interpretation and validation. Relatively few public database developers have the resources to implement sophisticated front-end user interfaces themselves. Accordingly, these developers would benefit from a reusable toolkit of user interface and data visualization components. We have designed the bioWidget toolkit as a set of JavaBean components. It includes a wide array of user interface components and defines an architecture for assembling applications. The toolkit is founded on established software engineering design patterns and principles, including componentry, Model-View-Controller, factored models and schema neutrality. As a proof of concept, we have used the bioWidget toolkit to create three extendible applications: AnnotView, BlastView and AlignView.
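
    A language-neutral sketch of the Model-View-Controller separation the toolkit is organized around is given below; the class and method names are illustrative and are not part of the bioWidget API.

      class SequenceModel:
          """Model: holds the data and notifies registered views when it changes."""
          def __init__(self):
              self._features = []
              self._views = []

          def add_view(self, view):
              self._views.append(view)

          def add_feature(self, name, start, end):
              self._features.append((name, start, end))
              for view in self._views:
                  view.refresh(self._features)

      class TextView:
          """View: renders the model; it knows nothing about how edits are made."""
          def refresh(self, features):
              for name, start, end in features:
                  print(name, start, end)

      class AnnotationController:
          """Controller: translates user actions into model updates."""
          def __init__(self, model):
              self.model = model

          def on_user_adds_feature(self, name, start, end):
              self.model.add_feature(name, start, end)

      model = SequenceModel()
      model.add_view(TextView())
      AnnotationController(model).on_user_adds_feature("exon1", 120, 480)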

  13. Connected cane: Tactile button input for controlling gestures of iOS voiceover embedded in a white cane.

    PubMed

    Batterman, Jared M; Martin, Vincent F; Yeung, Derek; Walker, Bruce N

    2018-01-01

    Accessibility of assistive consumer devices is an emerging research area with potential to benefit both users with and without visual impairments. In this article, we discuss the research and evaluation of using a tactile button interface to control an iOS device's native VoiceOver gesture navigation (Apple Accessibility, 2014). This research effort identified potential safety and accessibility issues for users trying to interact with and control their touchscreen mobile iOS devices while traveling independently. Furthermore, this article discusses the participatory design process in creating a solution that aims to solve issues in utilizing a tactile button interface in a novel device. The overall goal of this study is to enable visually impaired white cane users to access their mobile iOS device's capabilities and navigation aids more safely and efficiently on the go.

  14. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (HP9000 SERIES 300/400 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.

  15. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.

  16. Electronic processing and control system with programmable hardware

    NASA Technical Reports Server (NTRS)

    Alkalaj, Leon (Inventor); Fang, Wai-Chi (Inventor); Newell, Michael A. (Inventor)

    1998-01-01

    A computer system with reprogrammable hardware allows hardware resources to be dynamically allocated to different functions and provides adaptability to different processors and different operating platforms. All hardware resources are physically partitioned into system-user hardware and application-user hardware depending on the specific operation requirements. A reprogrammable interface preferably interconnects the system-user hardware and application-user hardware.

  17. Weintek interfaces for controlling the position of a robotic arm

    NASA Astrophysics Data System (ADS)

    Barz, C.; Ilia, M.; Ilut, T.; Pop-Vadean, A.; Pop, P. P.; Dragan, F.

    2016-08-01

    The paper presents the use of Weintek panels to control the position of a robotic arm operated step by step on its three motor axes. The PLC control interface is designed with a Weintek touch screen. The Weintek eMT3070a HMI is the user interface for commanding the PLC process. This HMI controls the local PLC by entering the coordinates on the X, Y and Z axes. The setup also allows development in a virtual environment for e-learning and for monitoring the robotic arm's actions.

  18. TSAFE Interface Control Document v 2.0

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Bach, Ralph E.

    2013-01-01

    This document specifies the data interface for TSAFE, the Tactical Separation-Assured Flight Environment. TSAFE is a research prototype of a software application program for alerting air traffic controllers to imminent conflicts in enroute airspace. It is intended for Air Route Traffic Control Centers ("Centers") in the U.S. National Airspace System. It predicts trajectories for approximately 3 minutes into the future, searches for conflicts, and sends data about predicted conflicts to the client, which uses the data to alert an air traffic controller of conflicts. TSAFE itself does not provide a graphical user interface.

  19. Experiments in cooperative manipulation: A system perspective

    NASA Technical Reports Server (NTRS)

    Schneider, Stanley A.; Cannon, Robert H., Jr.

    1989-01-01

    In addition to cooperative dynamic control, the system incorporates real-time vision feedback, a novel programming technique, and a graphical high-level user interface. By focusing on the vertical integration problem, not only are these subsystems examined, but also their interfaces and interactions. The control system implements a multi-level hierarchical structure; the techniques developed for operator input, strategic command, and cooperative dynamic control are presented. At the highest level, a mouse-based graphical user interface allows an operator to direct the activities of the system. Strategic command is provided by a table-driven finite state machine; this methodology provides a powerful yet flexible technique for managing the concurrent system interactions. The dynamic controller implements object impedance control, an extension of Neville Hogan's impedance control concept to cooperative arm manipulation of a single object. Experimental results are presented, showing the system locating and identifying a moving object, catching it, and performing a simple cooperative assembly. Results from dynamic control experiments are also presented, showing the controller's excellent dynamic trajectory tracking performance, while also permitting control of environmental contact force.
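
    The table-driven finite state machine used for strategic command can be illustrated with a minimal sketch; the states, events, and actions below are hypothetical and are not taken from the experimental system.

      # Transition table: (current state, event) -> (next state, action).
      def start_tracking():
          print("tracking object")

      def close_grippers():
          print("catching object")

      def begin_assembly():
          print("starting cooperative assembly")

      TRANSITIONS = {
          ("idle",     "object_seen"):  ("tracking", start_tracking),
          ("tracking", "within_reach"): ("catching", close_grippers),
          ("catching", "grasp_secure"): ("assembly", begin_assembly),
      }

      def step(state, event):
          next_state, action = TRANSITIONS.get((state, event), (state, None))
          if action is not None:
              action()
          return next_state

      state = "idle"
      for event in ["object_seen", "within_reach", "grasp_secure"]:
          state = step(state, event)
      print("final state:", state)

    Because behavior lives in the table rather than in nested conditionals, new concurrent interactions can be added by extending the table rather than rewriting control logic.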

  20. An adaptive brain actuated system for augmenting rehabilitation

    PubMed Central

    Roset, Scott A.; Gant, Katie; Prasad, Abhishek; Sanchez, Justin C.

    2014-01-01

    For people living with paralysis, restoration of hand function remains the top priority because it leads to independence and improvement in quality of life. In approaches to restore hand and arm function, a goal is to better engage voluntary control and counteract maladaptive brain reorganization that results from non-use. Standard rehabilitation augmented with developments from the study of brain-computer interfaces could provide a combined therapy approach for motor cortex rehabilitation and for alleviating motor impairments. In this paper, an adaptive brain-computer interface system intended for application to control a functional electrical stimulation (FES) device is developed as an experimental test bed for augmenting rehabilitation with a brain-computer interface. The system's performance is improved throughout rehabilitation by passive user feedback and reinforcement learning. By continuously adapting to the user's brain activity, similar adaptive systems could be used to support clinical brain-computer interface neurorehabilitation over multiple days. PMID:25565945
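
    One simplified way to picture decoder adaptation from passive feedback is a reward-modulated update of a linear decoder, sketched below with synthetic data; this is an assumption-laden illustration, not the system described in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      weights = np.zeros(16)            # linear decoder over 16 illustrative features

      def decode(features):
          """Binary movement-intent decision used to trigger a stimulation device."""
          return 1 if features @ weights > 0 else 0

      def adapt(features, decision, error_detected, lr=0.05):
          """Nudge the decoder away from decisions flagged as errors by the
          passive feedback signal, and toward decisions it implicitly confirms."""
          global weights
          direction = 1.0 if decision == 1 else -1.0
          reward = -1.0 if error_detected else 1.0
          weights += lr * reward * direction * features

      x = rng.normal(size=16)
      d = decode(x)
      adapt(x, d, error_detected=True)   # feedback says this decision was wrong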

  1. Experiment on a novel user input for computer interface utilizing tongue input for the severely disabled.

    PubMed

    Kencana, Andy Prima; Heng, John

    2008-11-01

    This paper introduces a novel passive tongue control and tracking device. The device is intended to be used by severely disabled or quadriplegic persons. The main distinction of this device compared to other existing tongue tracking devices is that the sensor employed is passive, which means no powered electrical sensor has to be inserted into the user's mouth and hence there are no trailing wires. This haptic interface device employs inductive sensors to track the position of the user's tongue. The device is able to perform two main PC functions: those of the keyboard and the mouse. The results show that this device allows a severely disabled person to have some control over his or her environment, such as turning daily electrical devices or appliances on and off, and to use it as a viable PC Human Computer Interface (HCI) through tongue control. The operating principle and set-up of such a novel passive tongue HCI have been established with successful laboratory trials and experiments. Further clinical trials will be required to test the device on disabled persons before it is ready for future commercial development.

  2. User interface for a tele-operated robotic hand system

    DOEpatents

    Crawford, Anthony L

    2015-03-24

    Disclosed here is a user interface for a robotic hand. The user interface anchors a user's palm in a relatively stationary position and determines various angles of interest necessary for a user's finger to achieve a specific fingertip location. The user interface additionally conducts a calibration procedure to determine the user's applicable physiological dimensions. The user interface uses the applicable physiological dimensions and the specific fingertip location, and treats the user's finger as a two link three degree-of-freedom serial linkage in order to determine the angles of interest. The user interface communicates the angles of interest to a gripping-type end effector which closely mimics the range of motion and proportions of a human hand. The user interface requires minimal contact with the operator and provides distinct advantages in terms of available dexterity, work space flexibility, and adaptability to different users.
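
    The patent treats each finger as a two-link, three degree-of-freedom serial linkage whose joint angles are recovered from a desired fingertip location. As a hedged illustration of that kind of calculation, the sketch below solves the simpler two-link planar case; the link lengths and the target position are assumptions for the example only.

    ```python
    # Simplified two-link planar inverse kinematics, illustrating how joint angles
    # could be recovered from a desired fingertip position. The patent's linkage has
    # three DOF; this 2-DOF planar sketch and its link lengths are assumptions.
    import math

    def two_link_ik(x, y, l1, l2):
        """Return (theta1, theta2) in radians for fingertip target (x, y)."""
        d2 = x * x + y * y
        cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if abs(cos_t2) > 1.0:
            raise ValueError("target out of reach")
        theta2 = math.acos(cos_t2)                      # distal joint angle
        k1 = l1 + l2 * math.cos(theta2)
        k2 = l2 * math.sin(theta2)
        theta1 = math.atan2(y, x) - math.atan2(k2, k1)  # proximal joint angle
        return theta1, theta2

    # Example with link lengths from a hypothetical calibration (millimetres).
    print(two_link_ik(55.0, 20.0, l1=40.0, l2=25.0))
    ```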

  3. [The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].

    PubMed

    Ganin, I P; Kaplan, A Ia

    2014-01-01

    The P300-based brain-computer interface requires the detection of the P300 wave of brain event-related potentials. Most of its users learn BCI control in several minutes, and after short classifier training they can type text on the computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, insufficient attractiveness for users and the conservative stimulus organization of this BCI may restrict its integration into the control of real information processes. At the same time, the initial movement of an object (a motion-onset stimulus) may be an independent factor that induces the P300 wave. In the current work we tested the hypothesis that complex "flash + movement" stimuli, together with a drastic and compact stimulus organization on the computer screen, may be much more attractive to the user operating the P300 BCI. In a study of 20 subjects we showed the effectiveness of our interface. Both accuracy and P300 amplitude were higher for flashing stimuli and complex "flash + movement" stimuli than for motion-onset stimuli. N200 amplitude was maximal for flashing stimuli, while for "flash + movement" and motion-onset stimuli it was only about half as large. A similar BCI with complex stimuli may be embedded into compact control systems that require a high level of user attention under negative external effects that obstruct BCI control.

  4. Development of the User Interface for AIR-Spec

    NASA Astrophysics Data System (ADS)

    Cervantes Alcala, E.; Guth, G.; Fedeler, S.; Samra, J.; Cheimets, P.; DeLuca, E.; Golub, L.

    2016-12-01

    The airborne infrared spectrometer (AIR-Spec) is an imaging spectrometer that will observe the solar corona during the 2017 total solar eclipse. This eclipse will provide a unique opportunity to observe infrared emission lines in the corona. Five spectral lines are of particular interest because they may eventually be used to measure the coronal magnetic field. To avoid infrared absorption from atmospheric water vapor, AIR-Spec will be placed on an NSF Gulfstream aircraft flying above 14.9 km. AIR-Spec must be capable of taking stable images while the plane moves. The instrument includes an image stabilization system, which uses fiber-optic gyroscopes to determine platform rotation, GPS to calculate the ephemeris of the sun, and a voltage-driven mirror to correct the line of sight. An operator monitors a white light image of the eclipse and manually corrects for residual drift. The image stabilization calculation is performed by a programmable automatic controller (PAC), which interfaces with the gyroscopes and mirror controller. The operator interfaces with a separate computer, which acquires images and computes the solar ephemeris. To ensure image stabilization is successful, a human machine interface (HMI) was developed to allow connection between the client and PAC. In order to make control of the instruments user friendly during the short eclipse observation, a graphical user interface (GUI) was also created. The GUI's functionality includes turning image stabilization on and off, allowing the user to input information about the geometric setup, calculating the solar ephemeris, refining estimates of the initial aircraft attitude, and storing data from the PAC on the operator's computer. It also displays time, location, attitude, ephemeris, gyro rates and mirror angles.

  5. Wearable computer for mobile augmented-reality-based controlling of an intelligent robot

    NASA Astrophysics Data System (ADS)

    Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino

    2000-10-01

    An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.

  6. Smart homes to improve the quality of life for all.

    PubMed

    Aiello, Marco; Aloise, Fabio; Baldoni, Roberto; Cincotti, Febo; Guger, Christoph; Lazovik, Alexander; Mecella, Massimo; Pucci, Paolo; Rinsma, Johanna; Santucci, Giuseppe; Taglieri, Massimiliano

    2011-01-01

    A home is smart when, being aware of its own state and that of its users, it is capable of controlling itself in order to support the users' wishes and thus improve their quality of life. This holds both for users with special needs and for those with ordinary domestic needs. In this paper, we give an overview of the Smart Homes for All project, which represents the current state of the art with respect to software control and user interfaces in the smart homes arena.

  7. An intelligent multi-media human-computer dialogue system

    NASA Technical Reports Server (NTRS)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  8. Use of natural user interfaces in water simulations

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; van Dam, A.; Jagers, B.

    2013-12-01

    Conventional graphical user interfaces, used to edit input and present results of earth science models, have seen little innovation over the past two decades. In most cases model data are presented and edited using 2D projections, even when working with 3D data. The emergence of 3D motion-sensing technologies, such as the Microsoft Kinect and the LEAP Motion, opens new possibilities for user interaction by adding more degrees of freedom compared to the classical mouse and keyboard. Here we investigate how interaction with hydrodynamic numerical models can be improved using these new technologies. Our research hypothesis (H1) states that a properly designed 3D graphical user interface paired with a 3D motion sensor can significantly reduce the time required to set up and use numerical models. In this work we have used a LEAP Motion controller combined with the shallow water flow model engine D-Flow Flexible Mesh. Interacting with numerical model using hands

  9. Designing a Visual Interface for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia

    1999-01-01

    "MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of…

  10. Diverse applications of advanced man-telerobot interfaces

    NASA Technical Reports Server (NTRS)

    Mcaffee, Douglas A.

    1991-01-01

    Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential application wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.

  11. Mechanical Engineering Design Project report: Enabler control systems

    NASA Technical Reports Server (NTRS)

    Cullen, Christian; Delvecchio, Dave; Scarborough, Alan; Havics, Andrew A.

    1992-01-01

    The Controls Group was assigned responsibility for designing the Enabler's control system. The design requirement was that the control system provide a simple user interface to control the boom articulation joints, chassis articulation joints, and the wheel drive. The system controlled hydraulic motors on the Enabler using 8-bit microprocessor boards. In addition, position and velocity feedback had to be interfaced to provide the operator with confirmation as well as control.

  12. An intelligent interface for satellite operations: Your Orbit Determination Assistant (YODA)

    NASA Technical Reports Server (NTRS)

    Schur, Anne

    1988-01-01

    An intelligent interface is often characterized by the ability to adapt evaluation criteria as the environment and user goals change. Some factors that impact these adaptations are redefinition of task goals and, hence, user requirements; time criticality; and system status. To implement adaptations affected by these factors, a new set of capabilities must be incorporated into the human-computer interface design. These capabilities include: (1) dynamic update and removal of control states based on user inputs, (2) generation and removal of logical dependencies as change occurs, (3) uniform and smooth interfacing to numerous processes, databases, and expert systems, and (4) unobtrusive on-line assistance to users. These concepts were applied and incorporated into a human-computer interface using artificial intelligence techniques to create a prototype expert system, Your Orbit Determination Assistant (YODA). YODA is a smart interface that supports, in real time, orbit analysts who must determine the location of a satellite during the station acquisition phase of a mission. Also described is the integration of the four knowledge sources required to support the orbit determination assistant: orbital mechanics, spacecraft specifications, characteristics of the mission support software, and orbit analyst experience. This initial effort is continuing with expansion of YODA's capabilities, including evaluation of the results of the orbit determination task.

  13. Brain-controlled applications using dynamic P300 speller matrices.

    PubMed

    Halder, Sebastian; Pinegger, Andreas; Käthner, Ivo; Wriessnegger, Selina C; Faller, Josef; Pires Antunes, João B; Müller-Putz, Gernot R; Kübler, Andrea

    2015-01-01

    Access to the world wide web and multimedia content is an important aspect of life. We present a web browser and a multimedia user interface adapted for control with a brain-computer interface (BCI), which can be used by severely motor-impaired persons. The web browser dynamically determines the most efficient P300 BCI matrix size to select the links on the current website. This enables control of the web browser with fewer commands and smaller matrices. The multimedia player was based on existing software. Both applications were evaluated with a sample of ten healthy participants and three end-users. All participants used a visual P300 BCI with face stimuli for control. The healthy participants completed the multimedia player task with 90% accuracy and the web browsing task with 85% accuracy. The end-users completed the tasks with 62% and 58% accuracy. All healthy participants and two out of three end-users reported that they felt in control of the system. In this study we presented a multimedia application and an efficient web browser implemented for control with a BCI. Both applications provide access to important areas of modern information retrieval and entertainment. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. A PDP-15 to industrial-14 interface at the Lewis Research Center's cyclotron

    NASA Technical Reports Server (NTRS)

    Kebberly, F. R.; Leonard, R. F.

    1977-01-01

    An interface (hardware and software) was built which permits the loading, monitoring, and control of a Digital Equipment Industrial-14/30 programmable controller by a PDP-15 computer. The interface utilizes the serial mode for data transfer to and from the controller, so that the required hardware is essentially that of a teletype unit except for the speed of transmission. The software described here permits the user to load binary paper tape, read or load individual controller memory locations, and, if desired, turn controller outputs on and off directly from the computer.

  15. Matching brain-machine interface performance to space applications.

    PubMed

    Citi, Luca; Tonet, Oliver; Marinelli, Martina

    2009-01-01

    A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation in space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and an HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls, are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.

  16. Goal selection versus process control while learning to use a brain-computer interface

    NASA Astrophysics Data System (ADS)

    Royer, Audrey S.; Rose, Minn L.; He, Bin

    2011-06-01

    A brain-computer interface (BCI) can be used to accomplish a task without requiring motor output. Two major control strategies used by BCIs during task completion are process control and goal selection. In process control, the user exerts continuous control and independently executes the given task. In goal selection, the user communicates their goal to the BCI and then receives assistance executing the task. A previous study has shown that goal selection is more accurate and faster in use. An unanswered question is, which control strategy is easier to learn? This study directly compares goal selection and process control while learning to use a sensorimotor rhythm-based BCI. Twenty young healthy human subjects were randomly assigned either to a goal selection or a process control-based paradigm for eight sessions. At the end of the study, the best user from each paradigm completed two additional sessions using all paradigms randomly mixed. The results of this study were that goal selection required a shorter training period for increased speed, accuracy, and information transfer over process control. These results held for the best subjects as well as in the general subject population. The demonstrated characteristics of goal selection make it a promising option to increase the utility of BCIs intended for both disabled and able-bodied users.

  17. Force Control and Nonlinear Master-Slave Force Profile to Manage an Admittance Type Multi-Fingered Haptic User Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony L. Crawford

    2012-08-01

    Natural movements and force feedback are important elements in using teleoperated equipment if complex and speedy manipulation tasks are to be accomplished in remote and/or hazardous environments, such as hot cells, glove boxes, decommissioning, explosives disarmament, and space, to name a few. To achieve this end, the research presented in this paper has developed an admittance-type, exoskeleton-like, multi-fingered haptic hand user interface that secures the user's palm and provides 3-dimensional force feedback to the user's fingertips. Unlike conventional haptic hand user interfaces, which limit themselves to integrating the human hand's characteristics only into the system's mechanical design, this system also carries that inspiration into the user interface's controller. This is achieved by manifesting the property differences between manipulation and grasping activities, as they pertain to the human hand, in a nonlinear master-slave force relationship. The results presented in this paper show that the admittance-type system has sufficient bandwidth to appear nearly transparent to the user when the user is in free motion, and that when the system is subjected to a manipulation task, increased performance is achieved using the nonlinear force relationship compared to the traditional linear scaling techniques implemented in the vast majority of systems.
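
    The exact nonlinear master-slave force relationship is not given in the abstract. The sketch below only illustrates the general idea of replacing linear force scaling with a profile that stays soft at small manipulation forces and stiffens for firm grasps; the power-law form, gains, and saturation limit are assumptions rather than the authors' profile.

    ```python
    # Hedged sketch of a nonlinear master-slave force profile for an admittance-type
    # haptic hand interface. The power-law shape, exponent, and saturation limit are
    # illustrative assumptions; the paper's actual relationship is not reproduced.

    def linear_profile(slave_force_n, scale=0.2):
        """Traditional linear scaling of slave-side force back to the fingertips."""
        return scale * slave_force_n

    def nonlinear_profile(slave_force_n, gain=0.05, exponent=1.8, max_feedback_n=8.0):
        """Soft for small (manipulation) forces, stiffer for large (grasping) forces."""
        feedback = gain * (slave_force_n ** exponent)
        return min(feedback, max_feedback_n)

    # Compare the two profiles across a range of slave-side contact forces (newtons).
    for f in [1.0, 5.0, 10.0, 20.0]:
        print(f, round(linear_profile(f), 2), round(nonlinear_profile(f), 2))
    ```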

  18. Advanced Query and Data Mining Capabilities for MaROS

    NASA Technical Reports Server (NTRS)

    Wang, Paul; Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Hy, Franklin H.

    2013-01-01

    The Mars Relay Operational Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay network. These tools span several levels of the system, including a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as received from the network. As part of MaROS, the innovators have developed and implemented a feature set that operates on several levels of the software architecture. This new feature is an advanced querying capability, available through either the Web-based user interface or a back-end REST interface, that gives access to all of the data gathered from the network. This software is not meant to replace the REST interface, but to augment and expand the range of available data. The current REST interface provides specific data that are used by the MaROS Web application to display and visualize the information; however, the information returned from the REST interface has typically been pre-processed to a subset of the entire repository, particularly the information of interest to the GUI (graphical user interface). The new, advanced query and data mining capabilities allow users to retrieve the raw data and/or to perform their own data processing. The query language used to access the repository is a restricted subset of the structured query language (SQL) that can be built safely from the Web user interface, or entered as freeform SQL by a user. The results are returned in CSV (Comma Separated Values) format for easy export to third-party tools and applications that can be used for data mining or user-defined visualization and interpretation. This is the first time that a service is capable of providing access to all cross-project relay data from a single Web resource. Because MaROS contains the data for a variety of missions from the Mars network, spanning both NASA and ESA, the software also establishes an access control list (ACL) on each data record in the database repository to enforce user access permissions through a multilayered approach.
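
    A client of such a query capability would typically post a restricted SQL statement and parse the CSV response. The following hedged sketch shows that pattern with Python's standard library; the endpoint URL, parameter name, and column names are hypothetical and do not describe the actual MaROS REST interface.

    ```python
    # Hedged sketch: submit a restricted-SQL query to a REST endpoint and read the
    # CSV result. URL, parameter name, and schema below are hypothetical examples.
    import csv
    import io
    import urllib.parse
    import urllib.request

    def run_query(base_url, sql):
        data = urllib.parse.urlencode({"query": sql}).encode("utf-8")
        request = urllib.request.Request(base_url, data=data)   # POST the query text
        with urllib.request.urlopen(request) as resp:
            text = resp.read().decode("utf-8")
        return list(csv.DictReader(io.StringIO(text)))           # rows as dictionaries

    # Example usage (hypothetical endpoint and columns):
    # rows = run_query("https://maros.example/api/query",
    #                  "SELECT pass_id, start_time, data_volume FROM relay_passes")
    # for row in rows:
    #     print(row["pass_id"], row["data_volume"])
    ```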

  19. Using Eye Movement to Control a Computer: A Design for a Lightweight Electro-Oculogram Electrode Array and Computer Interface

    PubMed Central

    Iáñez, Eduardo; Azorin, Jose M.; Perez-Vidal, Carlos

    2013-01-01

    This paper describes a human-computer interface based on electro-oculography (EOG) that allows interaction with a computer using eye movement. The EOG registers the movement of the eye by measuring, through electrodes, the difference in potential between the cornea and the retina. A new pair of EOG glasses has been designed to improve the user's comfort and to remove the manual procedure of placing the EOG electrodes around the user's eyes. The interface, which includes the EOG electrodes, uses a new processing algorithm that is able to detect the gaze direction and eye blinks from the EOG signals. The system reliably enabled subjects to control the movement of a dot on a video screen. PMID:23843986
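
    The paper's processing algorithm is not described in detail here; as a minimal sketch of how gaze direction and blinks might be classified from filtered EOG channels, the following threshold-based example can serve, with the thresholds and decision logic being illustrative assumptions rather than the authors' method.

    ```python
    # Minimal sketch of threshold-based detection of eye movements and blinks from
    # horizontal/vertical EOG channels. Thresholds and logic are assumptions.
    def classify_eog(h_uv, v_uv, move_thresh=80.0, blink_thresh=200.0):
        """Classify one (already filtered) EOG sample pair, in microvolts."""
        if v_uv > blink_thresh:
            return "blink"                      # large positive vertical deflection
        if abs(h_uv) > move_thresh:
            return "right" if h_uv > 0 else "left"
        if abs(v_uv) > move_thresh:
            return "up" if v_uv > 0 else "down"
        return "center"

    print(classify_eog(120.0, 10.0))   # -> right
    print(classify_eog(5.0, 260.0))    # -> blink
    ```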

  20. Vision based interface system for hands free control of an Intelligent Wheelchair.

    PubMed

    Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi

    2009-08-06

    Due to the shift in the age structure of today's populations, the need to develop devices and technologies that support elderly and disabled people has been increasing. Traditionally, the wheelchair, both powered and manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it remains highly restrictive, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities in their limbs. This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information: the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infrared sensors, a PC camera, and a vision system. The vision system that analyzes the user's gestures operates in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using AdaBoost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to cluttered backgrounds and time-varying illumination. To demonstrate these advantages, the proposed system was tested with 34 users in indoor and outdoor environments and the results were compared with those of other systems; the proposed system showed superior performance in terms of speed and accuracy. It thus provides a friendly and convenient interface for severely disabled people.
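
    As a rough sketch of the detector stage described above, the example below uses OpenCV's Haar cascade (an AdaBoost-based detector) to find the face and then crops a lower-face region for mouth analysis; the specific cascade and crop proportions are assumptions, not the authors' implementation.

    ```python
    # Rough sketch of the detector stage: AdaBoost-based face detection followed by
    # cropping the lower face region, from which the mouth is segmented using edges.
    # OpenCV's Haar cascade stands in for the paper's detector; crop ratios are assumed.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face_and_mouth_region(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        # Assume the mouth lies in the lower third of the detected face box.
        mouth_roi = gray[y + 2 * h // 3: y + h, x: x + w]
        edges = cv2.Canny(mouth_roi, 50, 150)   # edge map used to locate the mouth
        return (x, y, w, h), edges
    ```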

  1. Network and user interface for PAT DOME virtual motion environment system

    NASA Technical Reports Server (NTRS)

    Worthington, J. W.; Duncan, K. M.; Crosier, W. G.

    1993-01-01

    The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) provides astronauts a virtual microgravity sensory environment designed to help alleviate the symptoms of space motion sickness (SMS). The system consists of four microcomputers networked to provide real-time control, and an image generator (IG) driving a wide-angle video display inside a dome structure. The spherical display demands distortion correction. The system is currently being modified with a new graphical user interface (GUI) and a new Silicon Graphics IG. This paper concentrates on the new GUI and the networking scheme. The new GUI eliminates proprietary graphics hardware and software, and instead makes use of standard, low-cost PC video (CGA) and off-the-shelf software (Microsoft's Quick C). Mouse selection for user input is supported. The new Silicon Graphics IG requires an Ethernet interface. The microcomputer known as the Real Time Controller (RTC), which has overall control of the system and is written in Ada, was modified to use the free public-domain NCSA Telnet software for Ethernet communications with the Silicon Graphics IG. The RTC also maintains the original ARCNET communications through Novell NetWare IPX with the rest of the system. The Telnet TCP/IP protocol was first used for real-time communication, but because of buffering problems the Telnet datagram (UDP) protocol had to be implemented. Since the Telnet modules are written in C, the Ada pragma 'Interface' was used to interface with the network calls.

  2. Simulation and experimental studies of operators' decision styles and crew composition while using an ecological and traditional user interface for the control room of a nuclear power plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meshkati, N.; Buller, B.J.; Azadeh, M.A.

    1995-04-01

    The goal of this research is threefold: (1) use of the Skill-, Rule-, and Knowledge-based levels of cognitive control -- the SRK framework -- to develop an integrated information processing conceptual framework (for integration of workstation, job, and team design); (2) to evaluate the user interface component of this framework -- the Ecological display; and (3) to analyze the effect of operators' individual information processing behavior and decision styles on handling plant disturbances, plus their performance on, and preference for, Traditional and Ecological user interfaces. A series of studies was conducted. In Part I, a computer simulation model and a mathematical model were developed. In Part II, an experiment was designed and conducted at the EBR-II plant of the Argonne National Laboratory-West in Idaho Falls, Idaho. It is concluded that: the integrated SRK-based information processing model for control room operations is superior to the conventional rule-based model; operators' individual decision styles and the combination of their styles play a significant role in effective handling of nuclear power plant disturbances; use of the Ecological interface results in significantly more accurate event diagnosis and recall of various plant parameters, faster response to plant transients, and higher ratings of subject preference; and operators' decision styles affect both their performance and their preference for the Ecological interface.

  3. Exoskeletal meal assistance system (EMAS II) for progressive muscle dystrophy patient.

    PubMed

    Hasegawa, Yasuhisa; Oura, Saori

    2011-01-01

    This paper introduces a 4-DOF exoskeletal meal assistance system (EMAS II) for progressive muscle dystrophy patients. It is generally better for patients to use their own hands in daily life, because active use helps maintain their residual function, health, and initiative. The EMAS II, which has a new joystick-type user interface device and three DOFs at the shoulder, is enhanced for easier operation and more comfortable support during eating, as the successor of the previous system, which had two DOFs at the shoulder. In order to control the 4-DOF system with the simple user interface device, the EMAS II simulates the upper-limb motion patterns of a healthy person. The motion patterns are modeled by extracting correlations between the height of the user's wrist joint and that of the user's elbow joint at the table. Moreover, the EMAS II automatically brings the user's hand up to his/her mouth or back to the table when he/she presses a preset switch on the interface device. Therefore the user only has to control the position of his/her wrist to pick or scoop food and then flip the switch to start the automatic mode, while the height of the elbow joint is controlled automatically by the EMAS II itself. The results of experiments, in which a healthy subject regarded as a muscle dystrophy patient ate a meal with the EMAS II, show that the subject finished her meal in a natural way in 18 minutes 40 seconds, within the recommended time of 30 minutes. © 2011 IEEE

  4. Temperature and melt solid interface control during crystal growth

    NASA Technical Reports Server (NTRS)

    Batur, Celal

    1990-01-01

    Findings on the adaptive control of a transparent Bridgman crystal growth furnace are summarized. The task of the process controller is to establish a user-specified axial temperature profile by controlling the temperatures in eight heating zones. The furnace controller is built around a computer. Adaptive PID (Proportional Integral Derivative) and pole placement control algorithms are applied. The need for an adaptive controller stems from the fact that the zone dynamics change over time. The controller was tested extensively on lead bromide crystal growth. Several different temperature profiles and ampoule translation rates were tried. The feasibility of solid-liquid interface quantification by image processing was determined. The interface is observed by a color video camera and the image data file is processed to determine whether the interface is flat, convex, or concave.
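
    The adaptive PID and pole-placement algorithms themselves cannot be reconstructed from the abstract; the sketch below shows only a basic discrete PID loop for a single heating zone, with placeholder gains, as a starting point for the per-zone control the paper describes.

    ```python
    # Minimal sketch of a discrete PID loop for one heating zone. Gains and sample
    # time are placeholders; the paper's adaptive scheme retunes such gains on-line
    # as the zone dynamics change, which is not reproduced here.
    class ZonePID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint_c, measured_c):
            error = setpoint_c - measured_c
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    zone = ZonePID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
    print(zone.update(setpoint_c=650.0, measured_c=640.0))  # heater command (arbitrary units)
    ```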

  5. Expansion of Smartwatch Touch Interface from Touchscreen to Around Device Interface Using Infrared Line Image Sensors.

    PubMed

    Lim, Soo-Chul; Shin, Jungsoon; Kim, Seung-Chan; Park, Joonah

    2015-07-09

    Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose expanding the touch region of the screen to the back of the user's hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum-intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read wrist gestures. The gyroscope data drive a dynamic time warping gesture recognition algorithm that eliminates unintended touch inputs during free motion of the wrist while wearing the smartwatch. A prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed position of a finger touching the back of the hand could be used to control the smartwatch graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces.
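
    To illustrate the dynamic time warping step, the sketch below computes a DTW distance between an observed gyroscope sequence and a stored wrist-motion template and uses it to reject touches that occur during free wrist motion; the sample sequences and threshold are assumptions, not values from the paper.

    ```python
    # Sketch of dynamic time warping between recorded gyroscope samples and a
    # wrist-motion template, used here as a gate against unintended touches.
    def dtw_distance(seq_a, seq_b):
        n, m = len(seq_a), len(seq_b)
        inf = float("inf")
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(seq_a[i - 1] - seq_b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[n][m]

    template_swing = [0.0, 0.8, 1.6, 0.8, 0.0]    # stored free-motion wrist swing (rad/s)
    observed = [0.1, 0.7, 1.5, 0.9, 0.1]
    if dtw_distance(observed, template_swing) < 1.0:
        print("wrist is swinging freely: ignore touch input")
    ```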

  6. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    This report describes the HyFinBall user interface (hardware and software), its design space, and preliminary results of a formal user study. The work is presented in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations. Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.

  7. Graphical user interface simplifies infusion pump programming and enhances the ability to detect pump-related faults.

    PubMed

    Syroid, Noah; Liu, David; Albert, Robert; Agutter, James; Egan, Talmage D; Pace, Nathan L; Johnson, Ken B; Dowdle, Michael R; Pulsipher, Daniel; Westenskow, Dwayne R

    2012-11-01

    Drug administration errors are frequent and are often associated with the misuse of IV infusion pumps. One source of these errors may be the infusion pump's user interface. We used failure modes-and-effects analyses to identify programming errors and to guide the design of a new syringe pump user interface. We designed the new user interface to clearly show the pump's operating state simultaneously in more than one monitoring location. We evaluated anesthesia residents in laboratory and simulated environments on programming accuracy and error detection with the new user interface versus the user interface of a commercially available infusion pump. With the new user interface, we observed an 81% reduction in programming errors, a reduction in keystrokes per task from 9.2 ± 5.0 to 7.5 ± 5.5 (mean ± SD), a reduction in the time required per task from 18.1 ± 14.1 seconds to 10.9 ± 9.5 seconds, and significantly less perceived workload. Residents detected 38 of 70 (54%) of the events with the new user interface and 37 of 70 (53%) with the existing user interface, despite having no experience with the new user interface and extensive experience with the existing interface. The number of programming errors and the workload were reduced partly because it took less time and fewer keystrokes to program the pump with the new user interface. Despite minimal training, residents quickly identified preexisting infusion pump problems with the new user interface. Intuitive and easy-to-program infusion pump interfaces may reduce drug administration errors and infusion pump-related adverse events.

  8. Microprocessor-controlled, wide-range streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amy E. Lewis, Craig Hollabaugh

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  9. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  10. Web-Based Interface for Command and Control of Network Sensors

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Doubleday, Joshua R.; Shams, Khawaja S.

    2010-01-01

    This software allows for the visualization and control of a network of sensors through a Web browser interface. It is currently being deployed for a network of sensors monitoring Mount St. Helens volcano; however, this innovation is generic enough that it can be deployed for any type of sensor Web. From this interface, the user is able to fully control and monitor the sensor Web. This includes, but is not limited to, sending "test" commands to individual sensors in the network, monitoring for real-world events, and reacting to those events.

  11. Communications interface for wireless communications headset

    NASA Technical Reports Server (NTRS)

    Culotta, Jr., Anthony Joseph (Inventor); Seibert, Marc A. (Inventor)

    2004-01-01

    A universal interface adapter circuit interfaces, for example, a wireless communications headset with any type of communications system, including those that require push-to-talk (PTT) signaling. The interface adapter is comprised of several main components, including an RF signaling receiver, a microcontroller and associated circuitry for decoding and processing the received signals, and programmable impedance matching and line interfacing circuitry for interfacing a wireless communications headset system base to a communications system. A signaling transmitter, which is preferably portable (e.g., handheld), is employed by the wireless headset user to send signals to the signaling receiver. In an embodiment of the invention directed specifically to push-to-talk (PTT) signaling, the wireless headset user presses a button on the signaling transmitter when they wish to speak. This sends a signal to the microcontroller which decodes the signal and recognizes the signal as being a PTT request. In response, the microcontroller generates a control signal that closes a switch to complete a voice connection between the headset system base and the communications system so that the user can communicate with the communications system. With this arrangement, the wireless headset can be interfaced to any communications system that requires PTT signaling, without modification of the headset device. In addition, the interface adapter can also be configured to respond to or deliver any other types of signals, such as dual-tone-multiple-frequency (DTMF) tones, and on/off hook signals. The present invention is also scalable, and permits multiple wireless users to operate independently in the same environment through use of a plurality of the interface adapters.

  12. Neural networks for simultaneous classification and parameter estimation in musical instrument control

    NASA Astrophysics Data System (ADS)

    Lee, Michael; Freed, Adrian; Wessel, David

    1992-08-01

    In this report we present our tools for prototyping adaptive user interfaces in the context of real-time musical instrument control. Characteristic of most human communication is the simultaneous use of classified events and estimated parameters. We have integrated a neural network object into the MAX language to explore adaptive user interfaces that consider these facets of human communication. By placing the neural processing in the context of a flexible real-time musical programming environment, we can rapidly prototype experiments on applications of adaptive interfaces and learning systems to musical problems. We have trained networks to recognize gestures from a Mathews radio baton, a Nintendo Power Glove™, and MIDI keyboard gestural input devices. In one experiment, a network successfully extracted classification and attribute data from gestural contours transduced by a continuous space controller, suggesting the application of such networks in the interpretation of conducting gestures and musical instrument control. We discuss network architectures, the low-level features extracted for the networks to operate on, training methods, and musical applications of adaptive techniques.
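
    The paper's network architectures are not specified in the abstract; the toy sketch below simply illustrates the pattern of a single network emitting both a class label and a continuous parameter from one gesture feature vector, with layer sizes and random weights chosen arbitrarily for the example.

    ```python
    # Toy sketch of one network producing both a gesture class and a continuous
    # parameter estimate from the same feature vector. Sizes and weights are
    # illustrative assumptions; no training loop is shown.
    import numpy as np

    rng = np.random.default_rng(0)
    W_hidden = rng.normal(size=(16, 8))     # 16 input features -> 8 hidden units
    W_class = rng.normal(size=(8, 4))       # 8 hidden units -> 4 gesture classes
    w_param = rng.normal(size=8)            # 8 hidden units -> 1 continuous attribute

    def forward(features):
        hidden = np.tanh(features @ W_hidden)
        scores = hidden @ W_class
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                           # softmax over gesture classes
        parameter = float(hidden @ w_param)            # e.g. gesture intensity
        return int(probs.argmax()), parameter

    gesture_features = rng.normal(size=16)             # features from a gestural controller
    print(forward(gesture_features))
    ```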

  13. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    PubMed Central

    Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng

    2017-01-01

    Brain-machine interfaces (BMIs) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we propose a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG) based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments involving object manipulation tasks while avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123

  14. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    Eos (Extensible object-oriented system) is a powerful application suite for image processing of electron micrographs. Ordinarily, Eos works only with character user interfaces (CUI) under operating systems such as OS X or Linux, which is not user-friendly. Users of Eos therefore need to be expert at image processing of electron micrographs and to have some knowledge of computer science as well. However, not everyone who needs Eos is comfortable with a CUI. We therefore extended Eos into an OS-independent web system with a graphical user interface (GUI) by integrating a web browser. The advantage of using a web browser is not only that it extends Eos with a GUI, but also that it lets Eos work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs from the others. Since the beginning of development, Eos has managed its user interface through an interface definition file, "OptionControlFile", written in CSV (Comma-Separated Value) format; each command has an "OptionControlFile" that holds the information needed to generate its interface and describe its usage. The developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also reads the "OptionControlFile" and produces a web user interface automatically, because this mechanism is mature and convenient. The basic client-side behaviour has been implemented and supports auto-generation of web forms, with functions for execution, image preview, and file upload to a web server. The system can thus execute Eos commands with the options specific to each command and carry out image analysis. Two problems remained: the image file format used for visualization and the workspace for analysis. File format information is useful for checking whether input/output files are correct, and a common workspace for analysis is needed because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two systems. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a web browser, and manipulates local files with the GUI in the browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), our platform for heterogeneous distributed environments. Users can place their resources, such as microscope images and text files, into the server-side environment supported by PIONE, and experts can write PIONE rule definitions that describe an image processing workflow. PIONE then runs each image processing step on a suitable computer, following the defined rules. PIONE also supports interactive manipulation, so a user can try a command with various parameter values. In this setting, we contribute the auto-generation of GUIs for PIONE workflows.
As an advanced function, we have developed a module that logs user actions. The logs include information such as the parameter values used in image processing, the sequence of commands, and so on. Used effectively, these logs offer many advantages: for example, when an expert discovers some know-how in image processing, other users can share the logs containing it, and recommended image analysis workflows may be derived by analyzing the logs. To implement a social platform for image processing for electron microscopists, we have also developed the necessary system infrastructure. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
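
    The "OptionControlFile" mechanism described above lends itself to automatic form generation. The following hedged sketch turns a small CSV option definition into an HTML form; the column layout and the command name are assumptions made for illustration, not the actual Eos file format.

    ```python
    # Hedged sketch of generating an HTML form from a CSV option-definition file,
    # in the spirit of the "OptionControlFile" mechanism. The column layout
    # (option name, description, default) and the command name are assumptions.
    import csv
    import io

    SAMPLE_OPTION_FILE = """\
    -i,input image file,none
    -o,output image file,none
    -sigma,gaussian smoothing width,1.5
    """

    def csv_options_to_form(text, command="hypotheticalEosCommand"):
        rows = csv.reader(io.StringIO(text))
        fields = []
        for name, description, default in rows:
            fields.append(
                f'<label>{description} ({name}): '
                f'<input name="{name}" value="{default}"></label>')
        body = "\n".join(fields)
        return (f'<form action="/run/{command}" method="post">\n'
                f'{body}\n<button>Run</button>\n</form>')

    print(csv_options_to_form(SAMPLE_OPTION_FILE))
    ```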

  15. Perception of synchronization errors in haptic and visual communications

    NASA Astrophysics Data System (ADS)

    Kameyama, Seiji; Ishibashi, Yutaka

    2006-10-01

    This paper deals with a system which conveys the haptic sensation experienced by a user to a remote user. In the system, the user controls a haptic interface device with another, remote haptic interface device while watching video. Haptic media and video of a real object which the user is touching are transmitted to the other user. By subjective assessment, we investigate the allowable range and the imperceptible range of synchronization error between the haptic media and the video. We employ four real objects and ask each subject whether the synchronization error is perceived for each object. The assessment results show that the synchronization error is perceived more easily when the haptic media are ahead of the video than when the haptic media are behind the video.

  16. Transferring brain-computer interfaces beyond the laboratory: successful application control for motor-disabled users.

    PubMed

    Leeb, Robert; Perdikis, Serafeim; Tonin, Luca; Biasiucci, Andrea; Tavella, Michele; Creatura, Marco; Molina, Alberto; Al-Khodairy, Abdul; Carlson, Tom; Millán, José D R

    2013-10-01

    Brain-computer interfaces (BCIs) are no longer only used by healthy participants under controlled conditions in laboratory environments, but also by patients and end-users, controlling applications in their homes or clinics, without the BCI experts around. But are the technology and the field mature enough for this? Especially the successful operation of applications - like text entry systems or assistive mobility devices such as tele-presence robots - requires a good level of BCI control. How much training is needed to achieve such a level? Is it possible to train naïve end-users in 10 days to successfully control such applications? In this work, we report our experiences of training 24 motor-disabled participants at rehabilitation clinics or at the end-users' homes, without BCI experts present. We also share the lessons that we have learned through transferring BCI technologies from the lab to the user's home or clinics. The most important outcome is that 50% of the participants achieved good BCI performance and could successfully control the applications (tele-presence robot and text-entry system). In the case of the tele-presence robot the participants achieved an average performance ratio of 0.87 (max. 0.97) and for the text entry application a mean of 0.93 (max. 1.0). The lessons learned and the gathered user feedback range from pure BCI problems (technical and handling), to common communication issues among the different people involved, and issues encountered while controlling the applications. The points raised in this paper are very widely applicable and we anticipate that they might be faced similarly by other groups, if they move on to bringing the BCI technology to the end-user, to home environments and towards application prototype control. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study.

    PubMed

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-10-25

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object in six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the "Florida Secundaria" high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed a more meaningful learning (concepts learnt were more durable).

  18. 'Fly Like This': Natural Language Interface for UAV Mission Planning

    NASA Technical Reports Server (NTRS)

    Chandarana, Meghan; Meszaros, Erica L.; Trujillo, Anna; Allen, B. Danette

    2017-01-01

    With the increasing presence of unmanned aerial vehicles (UAVs) in everyday environments, the user base of these powerful and potentially intelligent machines is expanding beyond exclusively highly trained vehicle operators to include non-expert system users. Scientists seeking to augment costly and often inflexible methods of data collection historically used are turning towards lower cost and reconfigurable UAVs. These new users require more intuitive and natural methods for UAV mission planning. This paper explores two natural language interfaces - gesture and speech - for UAV flight path generation through individual user studies. Subjects who participated in the user studies also used a mouse-based interface for a baseline comparison. Each interface allowed the user to build flight paths from a library of twelve individual trajectory segments. Individual user studies evaluated performance, efficacy, and ease-of-use of each interface using background surveys, subjective questionnaires, and observations on time and correctness. Analysis indicates that natural language interfaces are promising alternatives to traditional interfaces. The user study data collected on the efficacy and potential of each interface will be used to inform future intuitive UAV interface design for non-expert users.

  19. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control

    NASA Astrophysics Data System (ADS)

    Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash

    2012-06-01

    This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semi-autonomous nature for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path-planning algorithms to ensure that obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps the users pinpoint source information or assists the operator with the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot controls (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also presented.

  20. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    PubMed

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. As well as easy usage, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in internet based wireless control of a BCI based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input of the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server successfully enables internet based wireless control of electrical home appliances through BCIs.
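    A minimal sketch of the kind of web endpoint such an embedded server might expose is shown below; the endpoint path, port and appliance-switching stub are assumptions for illustration, not the study's firmware.

        # Minimal sketch of a web endpoint that relays a BCI-selected command to a
        # home appliance. Endpoint path, port, and switch_appliance are hypothetical.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def switch_appliance(name, state):
            # Stand-in for the relay/driver hardware interface.
            print(f"setting {name} -> {state}")

        class ControlHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Expected form: /control?device=lamp&state=on
                if self.path.startswith("/control?"):
                    query = self.path.split("?", 1)[1]
                    params = dict(p.split("=") for p in query.split("&"))
                    switch_appliance(params.get("device", ""), params.get("state", ""))
                    self.send_response(200)
                else:
                    self.send_response(404)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()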

  1. Proposal of digital interface for the system of the air conditioner's remote control: analysis of the system of feedback.

    PubMed

    da Silva de Queiroz Pierre, Raisa; Kawada, Tarô Arthur Tavares; Fontes, André Guimarães

    2012-01-01

    The objective is to develop a proposal for a digital remote-control interface that acts as a support system when operating an air conditioner, adapted to users in general and based on ergonomic parameters, with the aim of reducing the problems faced by the user and improving the process. Twenty people were surveyed with a questionnaire covering both qualitative and quantitative levels. The linear method used consists of a sequence of steps in which the input of each step depends on the output of the previous one, even though the steps themselves are independent. Feedback, when necessary, must occur within each step separately.

  2. A Multifunctional Brain-Computer Interface Intended for Home Use: An Evaluation with Healthy Participants and Potential End Users with Dry and Gel-Based Electrodes

    PubMed Central

    Käthner, Ivo; Halder, Sebastian; Hintermüller, Christoph; Espinosa, Arnau; Guger, Christoph; Miralles, Felip; Vargiu, Eloisa; Dauwalder, Stefan; Rafael-Palou, Xavier; Solà, Marc; Daly, Jean M.; Armstrong, Elaine; Martin, Suzanne; Kübler, Andrea

    2017-01-01

    Current brain-computer interface (BCI) software is often tailored to the needs of scientists and technicians and therefore complex to allow for versatile use. To facilitate home use of BCIs, a multifunctional P300 BCI with a graphical user interface intended for non-expert set-up and control was designed and implemented. The system includes applications for spelling, web access, entertainment, artistic expression and environmental control. In addition to new software, it also includes new hardware for the recording of electroencephalogram (EEG) signals. The EEG system consists of a small and wireless amplifier attached to a cap that can be equipped with gel-based or dry contact electrodes. The system was systematically evaluated with a healthy sample, and targeted end users of BCI technology, i.e., people with varying degrees of motor impairment, tested the BCI in a series of individual case studies. Usability was assessed in terms of effectiveness, efficiency and satisfaction. Feedback from users was gathered with structured questionnaires. Two groups of healthy participants completed an experimental protocol with the gel-based and the dry contact electrodes (N = 10 each). The results demonstrated that all healthy participants gained control over the system and achieved satisfactory to high accuracies with both gel-based and dry electrodes (average error rates of 6 and 13%). Average satisfaction ratings were high, but certain aspects of the system such as the wearing comfort of the dry electrodes and design of the cap, and speed (in both groups) were criticized by some participants. Six potential end users tested the system during supervised sessions. The achieved accuracies varied greatly, from no control to high control with accuracies comparable to those of healthy volunteers. Satisfaction ratings of the two end users who gained control of the system were lower than those of healthy participants. The advantages and disadvantages of the BCI and its applications are discussed and suggestions are presented for improvements to pave the way for user-friendly BCIs intended to be used as assistive technology by persons with severe paralysis. PMID:28588442

  3. A Multifunctional Brain-Computer Interface Intended for Home Use: An Evaluation with Healthy Participants and Potential End Users with Dry and Gel-Based Electrodes.

    PubMed

    Käthner, Ivo; Halder, Sebastian; Hintermüller, Christoph; Espinosa, Arnau; Guger, Christoph; Miralles, Felip; Vargiu, Eloisa; Dauwalder, Stefan; Rafael-Palou, Xavier; Solà, Marc; Daly, Jean M; Armstrong, Elaine; Martin, Suzanne; Kübler, Andrea

    2017-01-01

    Current brain-computer interface (BCI) software is often tailored to the needs of scientists and technicians and therefore complex to allow for versatile use. To facilitate home use of BCIs, a multifunctional P300 BCI with a graphical user interface intended for non-expert set-up and control was designed and implemented. The system includes applications for spelling, web access, entertainment, artistic expression and environmental control. In addition to new software, it also includes new hardware for the recording of electroencephalogram (EEG) signals. The EEG system consists of a small and wireless amplifier attached to a cap that can be equipped with gel-based or dry contact electrodes. The system was systematically evaluated with a healthy sample, and targeted end users of BCI technology, i.e., people with varying degrees of motor impairment, tested the BCI in a series of individual case studies. Usability was assessed in terms of effectiveness, efficiency and satisfaction. Feedback from users was gathered with structured questionnaires. Two groups of healthy participants completed an experimental protocol with the gel-based and the dry contact electrodes (N = 10 each). The results demonstrated that all healthy participants gained control over the system and achieved satisfactory to high accuracies with both gel-based and dry electrodes (average error rates of 6 and 13%). Average satisfaction ratings were high, but certain aspects of the system such as the wearing comfort of the dry electrodes and design of the cap, and speed (in both groups) were criticized by some participants. Six potential end users tested the system during supervised sessions. The achieved accuracies varied greatly, from no control to high control with accuracies comparable to those of healthy volunteers. Satisfaction ratings of the two end users who gained control of the system were lower than those of healthy participants. The advantages and disadvantages of the BCI and its applications are discussed and suggestions are presented for improvements to pave the way for user-friendly BCIs intended to be used as assistive technology by persons with severe paralysis.

  4. Development of a task analysis tool to facilitate user interface design

    NASA Technical Reports Server (NTRS)

    Scholtz, Jean C.

    1992-01-01

    A good user interface is one that facilitates the user in carrying out his task. Such interfaces are difficult and costly to produce. The most important aspect in producing a good interface is the ability to communicate to the software designers what the user's task is. The Task Analysis Tool is a system for cooperative task analysis and specification of the user interface requirements. This tool is intended to serve as a guide to development of initial prototypes for user feedback.

  5. Analogue mouse pointer control via an online steady state visual evoked potential (SSVEP) brain-computer interface

    NASA Astrophysics Data System (ADS)

    Wilson, John J.; Palaniappan, Ramaswamy

    2011-04-01

    The steady state visual evoked potential protocol has recently become a popular paradigm in brain-computer interface (BCI) applications. Typically (regardless of function) these applications offer the user a binary selection of targets that perform correspondingly discrete actions. Such discrete control systems are appropriate for applications that are inherently isolated in nature, such as selecting numbers from a keypad to be dialled or letters from an alphabet to be spelled. However, motivation exists for users to employ proportional control methods in intrinsically analogue tasks such as the movement of a mouse pointer. This paper introduces an online BCI in which control of a mouse pointer is directly proportional to a user's intent. Performance is measured over a series of pointer movement tasks and compared to the traditional discrete output approach. Analogue control allowed subjects to move the pointer to the cued target location faster than discrete output did, but suffered from more undesired movements overall. Best performance is achieved when combining the movement threshold of traditional discrete techniques with the range of movement offered by proportional control.
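    The contrast between discrete and proportional output can be sketched as follows, assuming an already-extracted SSVEP feature amplitude; thresholds and gains are illustrative.

        # Sketch contrasting discrete and proportional (analogue) pointer output.
        # The SSVEP feature is assumed to be an already-extracted amplitude of the
        # target flicker frequency; gains and thresholds are illustrative.
        def discrete_step(feature, threshold=1.0, step=20.0):
            """Traditional approach: a fixed step once the feature crosses threshold."""
            return step if feature > threshold else 0.0

        def proportional_step(feature, threshold=1.0, gain=15.0):
            """Analogue approach: movement scales with how far intent exceeds threshold."""
            return gain * max(0.0, feature - threshold)

        for f in (0.8, 1.2, 2.0):
            print(f, discrete_step(f), proportional_step(f))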

  6. Integration of multi-interface conversion channel using FPGA for modular photonic network

    NASA Astrophysics Data System (ADS)

    Janicki, Tomasz; Pozniak, Krzysztof T.; Romaniuk, Ryszard S.

    2010-09-01

    The article discusses the integration of different types of interfaces with FPGA circuits using a reconfigurable communication platform. The solution has been implemented in practice in a single node of a distributed measurement system. The construction of the communication platform is presented together with selected hardware modules, described in VHDL and implemented in FPGA circuits. The graphical user interface (GUI) that allows a user to control the operation of the system is also described. In the final part of the article, selected practical solutions are introduced. The whole measurement system resides on a multi-gigabit optical network, whose construction is highly modular, reconfigurable and scalable.

  7. PAMLX: a graphical user interface for PAML.

    PubMed

    Xu, Bo; Yang, Ziheng

    2013-12-01

    This note announces pamlX, a graphical user interface/front end for the paml (for Phylogenetic Analysis by Maximum Likelihood) program package (Yang Z. 1997. PAML: a program package for phylogenetic analysis by maximum likelihood. Comput Appl Biosci. 13:555-556; Yang Z. 2007. PAML 4: Phylogenetic analysis by maximum likelihood. Mol Biol Evol. 24:1586-1591). pamlX is written in C++ using the Qt library and communicates with paml programs through files. It can be used to create, edit, and print control files for paml programs and to launch paml runs. The interface is available for free download at http://abacus.gene.ucl.ac.uk/software/paml.html.
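    In essence, such a front end writes a control file and launches the corresponding paml program; a minimal sketch of that workflow is given below, with placeholder file names and only a few common codeml options, and assuming codeml is on the PATH.

        # Sketch of what a GUI front end like pamlX ultimately produces: a paml
        # control file plus a program launch. File names are placeholders.
        import subprocess

        ctl = """\
        seqfile = alignment.phy
        treefile = tree.nwk
        outfile = results.txt
        model = 0
        """

        with open("codeml.ctl", "w") as f:
            f.write(ctl)

        # pamlX communicates with the paml programs through files like this one
        # and then launches the run; here the launch is a plain subprocess call.
        subprocess.run(["codeml", "codeml.ctl"], check=True)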

  8. Formal analysis and automatic generation of user interfaces: approach, methodology, and an algorithm.

    PubMed

    Heymann, Michael; Degani, Asaf

    2007-04-01

    We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.
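    A toy version of the correctness criterion can be checked mechanically: the displayed (abstracted) indication together with the user action must determine the next indication. The machine model and abstraction map below are invented purely for illustration.

        # Toy illustration of the "correctness" criterion: the displayed (abstracted)
        # indication plus the user action must determine the next indication.
        machine = {  # (internal mode, user event) -> next internal mode
            ("idle", "press"): "heat_low",
            ("heat_low", "press"): "heat_high",
            ("heat_high", "press"): "idle",
        }
        display = {"idle": "OFF", "heat_low": "ON", "heat_high": "ON"}  # abstraction

        def is_correct(machine, display):
            seen = {}
            for (mode, event), nxt in machine.items():
                key = (display[mode], event)
                if key in seen and seen[key] != display[nxt]:
                    return False  # same indication + action, different outcomes
                seen[key] = display[nxt]
            return True

        print(is_correct(machine, display))  # False: "ON" + press is ambiguous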

  9. A low-cost computer-controlled Arduino-based educational laboratory system for teaching the fundamentals of photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Zachariadou, K.; Yiasemides, K.; Trougkakos, N.

    2012-11-01

    We present a low-cost, fully computer-controlled, Arduino-based, educational laboratory (SolarInsight) to be used in undergraduate university courses concerned with electrical engineering and physics. The major goal of the system is to provide students with the necessary instrumentation, software tools and methodology in order to learn fundamental concepts of semiconductor physics by exploring the process of an experimental physics inquiry. The system runs under the Windows operating system and is composed of a data acquisition/control board, a power supply and processing boards, sensing elements, a graphical user interface and data analysis software. The data acquisition/control board is based on the Arduino open source electronics prototyping platform. The graphical user interface and communication with the Arduino are developed in C# and C++ programming languages respectively, by using IDE Microsoft Visual Studio 2010 Professional, which is freely available to students. Finally, the data analysis is performed by using the open source, object-oriented framework ROOT. Currently the system supports five teaching activities, each one corresponding to an independent tab in the user interface. SolarInsight has been partially developed in the context of a diploma thesis conducted within the Technological Educational Institute of Piraeus under the co-supervision of the Physics and Electronic Computer Systems departments’ academic staff.
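    On the PC side, such a system typically reads measurement samples from the Arduino over a serial link; the sketch below assumes a simple "voltage,current" line format and placeholder port settings rather than the actual SolarInsight protocol.

        # Sketch of the PC side of an Arduino-based I-V measurement: read
        # "voltage,current" lines over the serial port and estimate the maximum
        # power point. Port name, baud rate and line format are assumptions.
        import serial  # pyserial

        def read_iv_curve(port="/dev/ttyACM0", baud=9600, samples=50):
            points = []
            with serial.Serial(port, baud, timeout=2) as link:
                for _ in range(samples):
                    line = link.readline().decode().strip()   # e.g. "0.45,0.12"
                    if line:
                        v, i = (float(x) for x in line.split(","))
                        points.append((v, i))
            return points

        def max_power_point(points):
            return max(points, key=lambda p: p[0] * p[1]) if points else None

        if __name__ == "__main__":
            curve = read_iv_curve()
            print("maximum power point (V, I):", max_power_point(curve))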

  10. Remapping residual coordination for controlling assistive devices and recovering motor functions.

    PubMed

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias B; Mussa-Ivaldi, Ferdinando A; Casadio, Maura

    2015-12-01

    The concept of human motor redundancy has attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in the development of human-machine interfaces (HMIs). The HMI can act at different levels, decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs completely bypass the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user's residual upper body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between the left and right sides of the body, with an increase in mobility and strength across all the degrees of freedom in the participants involved in the control of the interface. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies.
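    A minimal sketch of the PCA remapping idea, with an invented sensor count and random calibration data standing in for recorded body signals:

        # Minimal sketch of the PCA remapping idea: calibrate on recorded upper-body
        # signals, then project each new sample onto the first two principal
        # components to obtain cursor coordinates. Sensor count, scaling and the
        # random calibration data are illustrative; the clinical protocol is not
        # reproduced here.
        import numpy as np
        from sklearn.decomposition import PCA

        # Calibration: samples x sensor signals (e.g. 8 motion-sensor channels).
        calibration = np.random.randn(500, 8)
        pca = PCA(n_components=2).fit(calibration)

        def body_to_cursor(sample, gain=100.0):
            """Map one body-signal sample to (x, y) screen coordinates."""
            x, y = pca.transform(sample.reshape(1, -1))[0]
            return gain * x, gain * y

        print(body_to_cursor(np.random.randn(8)))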

  11. The Design and Implementation of a Semi-Autonomous Surf-Zone Robot Using Advanced Sensors and a Common Robot Operating System

    DTIC Science & Technology

    2011-06-01

    effective way-point navigation algorithm that interfaced with a Java based graphical user interface (GUI), written by Uzun, for a robot named Bender [2...the angular acceleration, θ̈, or angular rate, θ̇. When considering a joint driven by an electric motor, the inertia and friction can be divided into...interactive simulations that can receive input from user controls, scripts, and other applications, such as Excel and MATLAB. One drawback is that the

  12. An Approach to Providing a User Interface for Military Computer-Aided- Instruction in 1980

    DTIC Science & Technology

    1975-11-01

    commercial terminals is the use of a microprocessor unit (MPU) LSI chip controller. This technology is flexible and economical and can be expected to...various segments. By using an MPU and developing a software capability, the vendor can quickly and economically satisfy a large spectrum of user...the basis for an effective and economical user interface to military CAI systems.

  13. Embedded Control System for Smart Walking Assistance Device.

    PubMed

    Bosnak, Matevz; Skrjanc, Igor

    2017-03-01

    This paper presents the design and implementation of a unique control system for a smart hoist, a therapeutic device that is used in the rehabilitation of walking. The control system features a unique human-machine interface that allows the user to intuitively control the system just by moving or rotating their body. The paper contains an overview of the complete system, including the design and implementation of custom sensors, DC servo motor controllers, communication interfaces and an embedded-system based central control system. The prototype of the complete system was tested in a six-run experiment with 11 subjects, and the results show that the proposed control system interface is indeed intuitive and simple for the user to adopt.
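    A toy sketch of the body-motion interface idea, mapping a normalised lean signal to a velocity command with a dead band; thresholds and gains are invented and do not describe the smart hoist's actual controller.

        # Toy sketch: turn a measured lean/rotation of the user's body into a
        # velocity command, with a dead band so small postural sway does not move
        # the device. All numbers are invented for illustration.
        def body_to_velocity(lean, dead_band=0.05, gain=0.5, v_max=0.3):
            """lean: normalised body displacement in [-1, 1]; returns a m/s command."""
            if abs(lean) < dead_band:
                return 0.0
            v = gain * (lean - dead_band if lean > 0 else lean + dead_band)
            return max(-v_max, min(v_max, v))

        for lean in (0.02, 0.2, -0.8):
            print(lean, body_to_velocity(lean))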

  14. Write, read and answer emails with a dry 'n' wireless brain-computer interface system.

    PubMed

    Pinegger, Andreas; Deckert, Lisa; Halder, Sebastian; Barry, Norbert; Faller, Josef; Käthner, Ivo; Hintermüller, Christoph; Wriessnegger, Selina C; Kübler, Andrea; Müller-Putz, Gernot R

    2014-01-01

    Brain-computer interface (BCI) users can control very complex applications such as multimedia players or even web browsers. Therefore, different biosignal acquisition systems are available to noninvasively measure the electrical activity of the brain, the electroencephalogram (EEG). To make BCIs more practical, hardware and software are nowadays designed more user centered and user friendly. In this paper we evaluated one of the latest innovations in the area of BCI: A wireless EEG amplifier with dry electrode technology combined with a web browser which enables BCI users to use standard webmail. With this system ten volunteers performed a daily life task: Write, read and answer an email. Experimental results of this study demonstrate the power of the introduced BCI system.

  15. Promoting autonomy in a smart home environment with a smarter interface.

    PubMed

    Brennan, C P; McCullagh, P J; Galway, L; Lightbody, G

    2015-01-01

    In the not too distant future, the median population age will tend towards 65, an age at which dependency increases. Most older people want to remain autonomous and self-sufficient for as long as possible. As environments become smarter, home automation solutions can be provided to support this aspiration. The technology discussed within this paper focuses on providing a home automation system that can be controlled by most users regardless of mobility restrictions, and hence it may be applicable to older people. It comprises a hybrid Brain-Computer Interface (BCI), a home automation user interface and actuators. In the first instance, the system is controlled with conventional computer input, which is then replaced by eye tracking and finally by a collaboration of BCI and eye tracking. The systems have been assessed in terms of information throughput; benefits and limitations are evaluated.
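    Information throughput of such interface conditions is commonly summarised with the Wolpaw bit-rate formula; the sketch below computes it for illustrative numbers that are not the study's data.

        # Wolpaw information transfer rate: bits per selection and bits per minute.
        # The command-set size, accuracy and selection time are illustrative only.
        import math

        def bits_per_selection(n_targets, accuracy):
            p, n = accuracy, n_targets
            if p >= 1.0:
                return math.log2(n)
            if p <= 0.0:
                return 0.0
            return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

        def itr_bits_per_minute(n_targets, accuracy, seconds_per_selection):
            return bits_per_selection(n_targets, accuracy) * 60.0 / seconds_per_selection

        print(itr_bits_per_minute(n_targets=4, accuracy=0.9, seconds_per_selection=5.0))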

  16. Adherence to Standards in the Development of E-Learning Programs

    ERIC Educational Resources Information Center

    Novacek, Paul F.

    2016-01-01

    Consistent user interface standards are necessary with the development of e-learning courseware, as they are the glue that binds the users' experience with their expectations. For example, the user controls for video playback were standardized many years ago, and we all benefit knowing a right-facing triangle signifies a play function, while dual…

  17. Standards for the user interface - Developing a user consensus. [for Space Station Information System

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Perkins, Dorothy C.; Szczur, Martha R.

    1987-01-01

    The user support environment (USE), a set of software tools for a flexible, standard, interactive user interface to the Space Station systems, platforms, and payloads, is described in detail. Included in the USE concept are a user interface language, a run-time environment and user interface management system, support tools, and standards for human interaction methods. The goals and challenges of the USE are discussed, as well as a methodology based on prototype demonstrations for involving users in the process of validating the USE concepts. By prototyping the key concepts and salient features of the proposed user interface standards, the user's ability to respond is greatly enhanced.

  18. State of the art in nuclear telerobotics: focus on the man/machine connection

    NASA Astrophysics Data System (ADS)

    Greaves, Amna E.

    1995-12-01

    The interface between the human controller and remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality, and multi degree-of-freedom input devices lend us the ability to augment the man/machine interface, and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI that enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting edge concepts and enhancements that lead to the next generation of telerobotic interface systems.

  19. A new variable temperature solution-solid interface scanning tunneling microscope.

    PubMed

    Jahanbekam, Abdolreza; Mazur, Ursula; Hipps, K W

    2014-10-01

    We present a new solution-solid (SS) interface scanning tunneling microscope design that enables imaging at high temperatures with low thermal drift and with volatile solvents. In this new design, distinct from the conventional designs, the entire microscope is surrounded in a controlled-temperature and controlled-atmosphere chamber. This allows users to take measurements at high temperatures while minimizing thermal drift. By incorporating an open solution reservoir in the chamber, solvent evaporation from the sample is minimized; allowing users to use volatile solvents for temperature dependent studies at high temperatures. The new design enables the user to image at the SS interface with some volatile solvents for long periods of time (>24 h). An increase in the nonlinearity of the piezoelectric scanner in the lateral direction as a function of temperature is addressed. A temperature dependent study of cobalt(II) octaethylporphyrin (CoOEP) at the toluene/Au(111) interface has been performed with this instrument. It is demonstrated that the lattice parameters remain constant within experimental error from 24 °C to 75 °C. Similar quality images were obtained over the entire temperature range. We report the unit cell of CoOEP at the toluene/Au(111) interface (based on two molecules per unit cell) to be A = (1.36 ± 0.04) nm, B = (2.51 ± 0.04) nm, and α = 97° ± 2°.
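    From the quoted lattice parameters, the unit cell area follows directly; the short calculation below is only a quick check of the reported values.

        # Area of the reported CoOEP unit cell at the toluene/Au(111) interface,
        # computed from the quoted lattice parameters (two molecules per cell).
        import math
        A, B, alpha_deg = 1.36, 2.51, 97.0          # nm, nm, degrees
        area = A * B * math.sin(math.radians(alpha_deg))
        print(round(area, 2), "nm^2")               # ~3.39 nm^2, ~1.7 nm^2 per molecule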

  20. Brain-Computer Interface application: auditory serial interface to control a two-class motor-imagery-based wheelchair.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier

    2017-05-30

    Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to a low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a medium accuracy level above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
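    The serial auditory selection scheme can be sketched as a simple scanning loop in which the motor imagery class selects the command currently being announced and the idle class lets the scan advance; the classifier outputs below are simulated, and this is not the study's code.

        # Sketch of serial auditory command selection from a two-class BCI output.
        COMMANDS = ["forward", "right", "backward", "left"]

        def select_command(classifier_outputs):
            """classifier_outputs: iterable of 'imagery' or 'idle', one per announcement."""
            idx = 0
            for decision in classifier_outputs:
                if decision == "imagery":
                    return COMMANDS[idx]          # select what is being announced now
                idx = (idx + 1) % len(COMMANDS)   # idle: move on to the next command
            return None

        # e.g. the user stays idle twice, then performs the motor imagery task
        print(select_command(["idle", "idle", "imagery"]))  # -> "backward"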

  1. GEECS (Generalized Equipment and Experiment Control System)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GONSALVES, ANTHONY; DESHMUKH, AALHAD

    2017-01-12

    GEECS (Generalized Equipment and Experiment Control System) monitors and controls equipment distributed across a network, performs experiments by scanning input variables, and collects and stores various types of data synchronously from devices. Examples of devices include cameras, motors and pressure gauges. GEECS is based upon LabVIEW graphical object oriented programming (GOOP), allowing for a modular and scalable framework. Data for an arbitrary number of variables is published for subscription over TCP. A secondary framework allows easy development of graphical user interfaces for combined control of any available devices on the control system without the need for programming knowledge. This allows for rapid integration of GEECS into a wide variety of systems. A database interface provides for device and process configuration while allowing the user to save large quantities of data to local or network drives.
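    A generic sketch of publishing a device variable over TCP for subscribers, in the spirit of the behaviour described above; the message format, port and device reading are illustrative, and since GEECS itself is implemented in LabVIEW this is not its actual wire protocol.

        # Generic publish-over-TCP sketch: one subscriber receives periodic
        # JSON-encoded readings of a (simulated) pressure-gauge variable.
        import json, random, socket, time

        def publish_pressure(host="0.0.0.0", port=9000):
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind((host, port))
            server.listen(1)
            conn, _ = server.accept()
            try:
                while True:
                    reading = {"device": "gauge01", "variable": "pressure_mbar",
                               "value": round(random.uniform(1e-6, 1e-5), 8),
                               "time": time.time()}
                    conn.sendall((json.dumps(reading) + "\n").encode())
                    time.sleep(1.0)
            finally:
                conn.close()

        if __name__ == "__main__":
            publish_pressure()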

  2. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (HP9000 SERIES 700/800 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
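    The resource-file idea at the heart of the WorkBench/WPT split can be illustrated generically: the interface layout lives in a data file that the application reads at run time, so appearance changes need no recompilation. The file format and field names below are invented and are not TAE Plus's actual resource format.

        # Generic illustration of a data-driven user interface: layout comes from a
        # resource file read at run time, not from compiled code.
        import json

        RESOURCE = """
        {
          "window": {"title": "Telemetry", "width": 400, "height": 300},
          "items": [
            {"type": "dial",   "name": "temperature", "min": 0, "max": 120},
            {"type": "button", "name": "acknowledge", "label": "ACK"}
          ]
        }
        """

        def build_interface(resource_text):
            spec = json.loads(resource_text)
            print("window:", spec["window"]["title"])
            for item in spec["items"]:
                # A real toolkit call would go here; only the data-driven flow matters.
                print("create", item["type"], "named", item["name"])

        build_interface(RESOURCE)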

  3. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (IBM RS/6000 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  4. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION WITH MOTIF)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  5. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SILICON GRAPHICS VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  6. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  7. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (DEC RISC ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
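
    To make the resource-file idea above concrete, the sketch below (in Python, purely illustrative; TAE Plus itself exposes C/C++/Ada bindings, and the file format and field names here are invented) shows the general pattern the record describes: widget attributes such as type, label, colour, and position live in a WorkBench-style resource file that is read at run time, so the interface can change without recompiling and relinking the application.

```python
# Illustrative sketch (not TAE Plus code): keep user-interface attributes in an
# external resource file that is read at run time, so presentation can change
# without recompiling the application.
import json

RESOURCE_FILE = "panel.res.json"  # hypothetical WorkBench-style output file

def load_resources(path):
    """Read widget descriptions (type, label, colour, position) from disk."""
    with open(path) as fh:
        return json.load(fh)

def build_interface(resources):
    """Instantiate widgets from the resource description, not from code."""
    widgets = []
    for item in resources["items"]:
        widgets.append({
            "name": item["name"],
            "type": item["type"],          # e.g. "button", "dial", "strip_chart"
            "label": item.get("label", ""),
            "position": tuple(item.get("position", (0, 0))),
            "color": item.get("color", "default"),
        })
    return widgets

if __name__ == "__main__":
    # Write a tiny example resource file, then build the interface from it.
    example = {"items": [
        {"name": "run", "type": "button", "label": "Run", "position": [10, 10]},
        {"name": "temp", "type": "dial", "label": "Temperature", "color": "red"},
    ]}
    with open(RESOURCE_FILE, "w") as fh:
        json.dump(example, fh)
    for widget in build_interface(load_resources(RESOURCE_FILE)):
        print(widget)
```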

  8. Application of the user-centred design process according ISO 9241-210 in air traffic control.

    PubMed

    König, Christina; Hofmann, Thomas; Bruder, Ralph

    2012-01-01

    Designing a usable human machine interface for air traffic control is challenging and should follow approved methods. The ISO 9241-210 standard promises high usability of products by integrating future users and following an iterative process. This contribution describes the process and first results of the analysis and application of ISO 9241-210 to develop a planning tool for air traffic controllers.

  9. Closeout of CRADA JSA 2012S004: Chapter 5, Integrated Control System, of the document of the ESS Conceptual Design Report, publicly available at https://europeanspallationsource.se/accelerator-documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Satogata, Todd

    2013-04-22

    The integrated control system (ICS) is responsible for the whole ESS machine and facility: accelerator, target, neutron scattering instruments and conventional facilities. This unified approach keeps the costs of development, maintenance and support relatively low. ESS has selected a standardised, field-proven controls framework, the Experimental Physics and Industrial Control System (EPICS), which was originally developed jointly by Argonne and Los Alamos National Laboratories. Complementing this selection are best practices and experience from similar facilities regarding platform standardisation, control system development and device integration and commissioning. The components of ICS include the control system core, the control boxes, the BLED database management system, and the human machine interface. The control system core is a set of systems and tools that make it possible for the control system to provide required data, information and services to engineers, operators, physicists and the facility itself. The core components are the timing system that makes possible clock synchronisation across the facility, the machine protection system (MPS) and the personnel protection system (PPS) that prevent damage to the machine and personnel, and a set of control system services. Control boxes are servers that control a collection of equipment (for example a radio frequency cavity). The integrated control system will include many control boxes that can be assigned to one supplier, such as an internal team, a collaborating institute or a commercial vendor. This approach facilitates a clear division of responsibilities and makes integration much easier. A control box is composed of a standardised hardware platform, components, development tools and services. On the top level, it interfaces with the core control system components (timing, MPS, PPS) and with the human-machine interface. At the bottom, it interfaces with the equipment and parts of the facility through a set of analog and digital signals, real-time control loops and other communication buses. The ICS central data management system is named BLED (beam line element databases). BLED is a set of databases, tools and services that is used to store, manage and access data. It holds vital control system configuration and physics-related (lattice) information about the accelerator, target and instruments. It facilitates control system configuration by bringing together direct input-output controller (IOC) configuration and real-time data from proton and neutron beam line models. BLED also simplifies development and speeds up the code-test-debug cycle. The set of tools that access BLED will be tailored to the needs of different categories of users, such as ESS staff physicists, engineers, and operators; external partner laboratories; and visiting experimental instrument users. The human-machine interface is vital to providing a high-quality experience to ICS users. It encompasses a wide array of devices and software tools, from control room screens to engineer terminal windows; from beam physics data tools to post-mortem data analysis tools. It serves users with a wide range of skills from widely varied backgrounds. The Controls Group is developing a set of user profiles to accommodate this diverse range of use-cases and users.
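
    The following minimal Python sketch illustrates the "control box" idea described above: one object owns the signals of a piece of equipment and exposes them as named process variables with get/put access. It is a plain-Python illustration only, not EPICS or ESS code, and all names (ControlBox, caget, caput, the PV strings) are invented for the example.

```python
# Minimal sketch of the "control box" idea: one server object owns a collection
# of equipment signals and exposes them as named process variables.
class ControlBox:
    def __init__(self, name):
        self.name = name
        self._pvs = {}        # process-variable name -> current value
        self._outputs = {}    # process-variable name -> setter for hardware

    def add_readback(self, pv_name, initial=0.0):
        self._pvs[pv_name] = initial

    def add_setpoint(self, pv_name, apply_fn, initial=0.0):
        self._pvs[pv_name] = initial
        self._outputs[pv_name] = apply_fn

    def caget(self, pv_name):
        return self._pvs[pv_name]

    def caput(self, pv_name, value):
        self._pvs[pv_name] = value
        if pv_name in self._outputs:
            self._outputs[pv_name](value)   # push the new setpoint to hardware

if __name__ == "__main__":
    rf_cavity = ControlBox("RFQ-010")
    rf_cavity.add_readback("RFQ-010:Pwr-RB", initial=0.0)
    rf_cavity.add_setpoint("RFQ-010:Pwr-SP",
                           apply_fn=lambda v: print(f"driving amplifier to {v} kW"))
    rf_cavity.caput("RFQ-010:Pwr-SP", 120.0)
    print(rf_cavity.caget("RFQ-010:Pwr-SP"))
```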

  10. A Mobile Food Record For Integrated Dietary Assessment*

    PubMed Central

    Ahmad, Ziad; Kerr, Deborah A.; Bosch, Marc; Boushey, Carol J.; Delp, Edward J.; Khanna, Nitin; Zhu, Fengqing

    2017-01-01

    This paper presents an integrated dietary assessment system based on food image analysis that uses mobile devices or smartphones. We describe two components of our integrated system: a mobile application and an image-based food nutrient database that is connected to the mobile application. An easy-to-use mobile application user interface is described that was designed based on user preferences as well as the requirements of the image analysis methods. The user interface is validated by user feedback collected from several studies. Food nutrient and image databases are also described, which facilitate image-based dietary assessment and enable dietitians and other healthcare professionals to monitor patients' dietary intake in real time. The system has been tested and validated in several user studies involving more than 500 users who took more than 60,000 food images under controlled and community-dwelling conditions. PMID:28691119

  11. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse for capturing the directional intention of a user by the method of frame difference. We divide each Webcam frame into nine grids and make use of background registration to compute the moving object. In order to…
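
    The abstract sketches the algorithm only briefly, so here is a hedged NumPy illustration of the frame-difference idea: each frame is split into a 3x3 grid and the cell with the most inter-frame change is reported as the user's direction. The thresholds and grid labels are assumptions, not the authors' values.

```python
# Hedged sketch of the frame-difference idea: split each frame into a 3x3 grid,
# find the cell with the most inter-frame change, and report it as the pointing
# direction. Thresholds and grid labels are illustrative only.
import numpy as np

LABELS = [["up-left", "up", "up-right"],
          ["left", "center", "right"],
          ["down-left", "down", "down-right"]]

def direction_from_frames(prev, curr, threshold=15):
    """Return the 3x3 cell label with the largest motion energy, or None."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    h, w = diff.shape
    best, best_energy = None, 0.0
    for i in range(3):
        for j in range(3):
            cell = diff[i * h // 3:(i + 1) * h // 3, j * w // 3:(j + 1) * w // 3]
            energy = float((cell > threshold).mean())
            if energy > best_energy:
                best, best_energy = LABELS[i][j], energy
    return best if best_energy > 0.05 else None

if __name__ == "__main__":
    prev = np.zeros((120, 160), dtype=np.uint8)
    curr = prev.copy()
    curr[10:30, 120:150] = 200                  # synthetic motion, upper-right cell
    print(direction_from_frames(prev, curr))    # -> "up-right"
```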

  12. Distributed user interfaces for clinical ubiquitous computing applications.

    PubMed

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices such as digital pens, an active desk, and walk-up displays that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite of future mobile user interfaces and essential to develop clinical multi-device environments.

  13. Novel Networked Remote Laboratory Architecture for Open Connectivity Based on PLC-OPC-LabVIEW-EJS Integration. Application in Remote Fuzzy Control and Sensors Data Acquisition.

    PubMed

    González, Isaías; Calderón, Antonio José; Mejías, Andrés; Andújar, José Manuel

    2016-10-31

    In this paper the design and implementation of a network for integrating Programmable Logic Controllers (PLC), the Object-Linking and Embedding for Process Control protocol (OPC) and the open-source Easy Java Simulations (EJS) package is presented. A LabVIEW interface and the Java-Internet-LabVIEW (JIL) server complete the scheme for data exchange. This configuration allows the user to remotely interact with the PLC. Such integration can be considered a novelty in scientific literature for remote control and sensor data acquisition of industrial plants. An experimental application devoted to remote laboratories is developed to demonstrate the feasibility and benefits of the proposed approach. The experiment to be conducted is the parameterization and supervision of a fuzzy controller of a DC servomotor. The graphical user interface has been developed with EJS and the fuzzy control is carried out by our own PLC. In fact, the distinctive features of the proposed novel network application are the integration of the OPC protocol to share information with the PLC and the application under control. The user can perform the tuning of the controller parameters online and observe in real time the effect on the servomotor behavior. The target group is engineering remote users, specifically in control- and automation-related tasks. The proposed architecture system is described and experimental results are presented.
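
    As a rough illustration of the kind of fuzzy controller whose parameters the remote user tunes in this system, the sketch below implements a tiny rule base with triangular membership functions in Python. The membership ranges, rules, and gain are invented for the example; the actual controller runs inside the PLC.

```python
# Illustrative sketch of a very small fuzzy speed controller: triangular
# membership functions over the speed error and three rules whose weighted
# outputs are combined into a drive correction.
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error, gain=1.0):
    """Map speed error (rpm) to a drive correction using three rules."""
    negative = tri(error, -100, -50, 0)
    zero     = tri(error,  -50,   0, 50)
    positive = tri(error,    0,  50, 100)
    # Rule outputs (centroids of the output sets): decrease, hold, increase.
    weights = [negative, zero, positive]
    outputs = [-1.0, 0.0, 1.0]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return gain * sum(w * o for w, o in zip(weights, outputs)) / total

if __name__ == "__main__":
    for err in (-80, -20, 0, 35, 90):
        print(err, round(fuzzy_control(err), 3))
```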

  14. Novel Networked Remote Laboratory Architecture for Open Connectivity Based on PLC-OPC-LabVIEW-EJS Integration. Application in Remote Fuzzy Control and Sensors Data Acquisition

    PubMed Central

    González, Isaías; Calderón, Antonio José; Mejías, Andrés; Andújar, José Manuel

    2016-01-01

    In this paper the design and implementation of a network for integrating Programmable Logic Controllers (PLC), the Object-Linking and Embedding for Process Control protocol (OPC) and the open-source Easy Java Simulations (EJS) package is presented. A LabVIEW interface and the Java-Internet-LabVIEW (JIL) server complete the scheme for data exchange. This configuration allows the user to remotely interact with the PLC. Such integration can be considered a novelty in scientific literature for remote control and sensor data acquisition of industrial plants. An experimental application devoted to remote laboratories is developed to demonstrate the feasibility and benefits of the proposed approach. The experiment to be conducted is the parameterization and supervision of a fuzzy controller of a DC servomotor. The graphical user interface has been developed with EJS and the fuzzy control is carried out by our own PLC. In fact, the distinctive features of the proposed novel network application are the integration of the OPC protocol to share information with the PLC and the application under control. The user can perform the tuning of the controller parameters online and observe in real time the effect on the servomotor behavior. The target group is engineering remote users, specifically in control- and automation-related tasks. The proposed architecture system is described and experimental results are presented. PMID:27809229

  15. System for assisted mobility using eye movements based on electrooculography.

    PubMed

    Barea, Rafael; Boquete, Luciano; Mazo, Manuel; López, Elena

    2002-12-01

    This paper describes an eye-control method based on electrooculography (EOG) to develop a system for assisted mobility. One of its most important features is its modularity, making it adaptable to the particular needs of each user according to the type and degree of handicap involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several human-machine interfaces (HMI) based on EOG are discussed, with our study focusing on guiding and controlling a wheelchair for disabled people, where the control is actually effected by eye movements within the socket. Different techniques and guidance strategies are then shown with comments on the advantages and disadvantages of each one. The system consists of a standard electric wheelchair with an on-board computer, sensors and a graphic user interface run by the computer. On the other hand, this eye-control method can be applied to handle graphical interfaces, where the eye is used as a computer mouse. Results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aid for handicapped persons.
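
    A minimal sketch of the EOG-based guidance idea, assuming the horizontal and vertical electrooculographic channels have already been acquired and baseline-corrected: sustained deviations beyond a threshold are translated into wheelchair commands. The threshold and command set are illustrative, not the authors' calibration.

```python
# Hedged sketch: threshold the horizontal and vertical EOG channels and turn
# sustained eye deviations into wheelchair commands. Values are illustrative.
def eog_to_command(h_uV, v_uV, threshold_uV=60.0):
    """Map horizontal/vertical EOG amplitudes (microvolts) to a command."""
    if abs(h_uV) >= abs(v_uV) and abs(h_uV) > threshold_uV:
        return "turn_right" if h_uV > 0 else "turn_left"
    if abs(v_uV) > threshold_uV:
        return "forward" if v_uV > 0 else "stop"
    return "hold"

if __name__ == "__main__":
    samples = [(10, 5), (85, 12), (-90, 30), (20, 95), (15, -80)]
    for h, v in samples:
        print((h, v), "->", eog_to_command(h, v))
```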

  16. The hybrid UNIX controller for real-time data acquisition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huesman, R.H.; Klein, G.J.; Fleming, T.K.

    1996-06-01

    The authors describe a hybrid data acquisition architecture integrating a conventional UNIX workstation with CAMAC-based real-time hardware. The system combines the high-level programming simplicity and user interface of a UNIX workstation with the low-level timing control available from conventional real-time hardware. They detail this architecture as it has been implemented for control of the Donner 600-Crystal Positron Tomograph (PET600). Low-level data acquisition is carried out in this system using eight LeCroy 3588 histogrammers, which together, after derandomization, acquire events at rates up to 4 MHz, and two dedicated Motorola 6809 microprocessors, which arbitrate fine timing control during acquisition. A SUN Microsystems UNIX workstation is used for high-level control, allowing an easily extensible user interface in an X-Windows environment, as well as real-time communications to the low-level acquisition units. Communication between the high- and low-level units is carried out via a Jorway 73A SCSI-CAMAC crate controller and a serial interface. For this application, the hybrid configuration segments low-level from high-level control for ease of maintenance and provides a low-cost upgrade from dated high-level control hardware.

  17. Quantifying the role of motor imagery in brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Marchesotti, Silvia; Bassolino, Michela; Serino, Andrea; Bleuler, Hannes; Blanke, Olaf

    2016-04-01

    Despite technical advances in brain machine interfaces (BMI), for as-yet unknown reasons the ability to control a BMI remains limited to a subset of users. We investigate whether individual differences in BMI control based on motor imagery (MI) are related to differences in MI ability. We assessed whether differences in kinesthetic and visual MI, in the behavioral accuracy of MI, and in electroencephalographic variables, were able to differentiate between high- versus low-aptitude BMI users. High-aptitude BMI users showed higher MI accuracy as captured by subjective and behavioral measurements, pointing to a prominent role of kinesthetic rather than visual imagery. Additionally, for the first time, we applied mental chronometry, a measure quantifying the degree to which imagined and executed movements share a similar temporal profile. We also identified enhanced lateralized μ-band oscillations over sensorimotor cortices during MI in high- versus low-aptitude BMI users. These findings reveal that subjective, behavioral, and EEG measurements of MI are intimately linked to BMI control. We propose that poor BMI control cannot be ascribed only to intrinsic limitations of EEG recordings and that specific questionnaires and mental chronometry can be used as predictors of BMI performance (without the need to record EEG activity).

  18. Quantifying the role of motor imagery in brain-machine interfaces

    PubMed Central

    Marchesotti, Silvia; Bassolino, Michela; Serino, Andrea; Bleuler, Hannes; Blanke, Olaf

    2016-01-01

    Despite technical advances in brain machine interfaces (BMI), for as-yet unknown reasons the ability to control a BMI remains limited to a subset of users. We investigate whether individual differences in BMI control based on motor imagery (MI) are related to differences in MI ability. We assessed whether differences in kinesthetic and visual MI, in the behavioral accuracy of MI, and in electroencephalographic variables, were able to differentiate between high- versus low-aptitude BMI users. High-aptitude BMI users showed higher MI accuracy as captured by subjective and behavioral measurements, pointing to a prominent role of kinesthetic rather than visual imagery. Additionally, for the first time, we applied mental chronometry, a measure quantifying the degree to which imagined and executed movements share a similar temporal profile. We also identified enhanced lateralized μ-band oscillations over sensorimotor cortices during MI in high- versus low-aptitude BMI users. These findings reveal that subjective, behavioral, and EEG measurements of MI are intimately linked to BMI control. We propose that poor BMI control cannot be ascribed only to intrinsic limitations of EEG recordings and that specific questionnaires and mental chronometry can be used as predictors of BMI performance (without the need to record EEG activity). PMID:27052520

  19. Body machine interfaces for neuromotor rehabilitation: a case study.

    PubMed

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Chen, David; Mussa-Ivaldi, Ferdinando A; Casadio, Maura

    2014-01-01

    High-level spinal cord injury (SCI) survivors face two related problems every day: recovering motor skills and regaining functional independence. Body machine interfaces (BoMIs) empower people with severe motor disabilities with the ability to control an external device, but they also offer the opportunity to focus concurrently on achieving rehabilitative goals. In this study we developed a portable, low-cost BoMI that addresses both problems. The BoMI remaps the user's residual upper body mobility to the two coordinates of a cursor on a computer monitor. By controlling the cursor, the user can perform functional tasks, such as entering text and playing games. This framework also allows the mapping between the body and the cursor space to be modified, gradually challenging the user to exercise more impaired movements. With this approach, we were able to change the behavior of our SCI subject, who initially used almost exclusively his less impaired degrees of freedom - on the left side - for controlling the BoMI. At the end of a few practice sessions he had restored symmetry between the left and right sides of the body, with an increase in mobility and strength across all the degrees of freedom involved in the control of the interface. This is the first proof of concept that our BoMI can be used to control assistive devices and reach specific rehabilitative goals simultaneously.
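
    To illustrate the body-to-cursor remapping described above, here is a hedged Python sketch in which a small linear map projects several body-motion signals onto the two cursor coordinates, and modifying the map is what shifts effort toward more impaired movements. The matrix values and channel assignments are assumptions for the example, not the study's calibration.

```python
# Minimal sketch of a body-machine-interface mapping: a linear map takes several
# body-motion signals (e.g. shoulder and arm channels) to two cursor coordinates.
import numpy as np

# 2 cursor coordinates from 4 body signals (rows: x, y). Values are illustrative.
W = np.array([[0.8, 0.1, 0.05, 0.05],
              [0.1, 0.8, 0.05, 0.05]])

def body_to_cursor(signals, weights=W):
    """Project a 4-element body-signal vector onto cursor (x, y)."""
    return weights @ np.asarray(signals, dtype=float)

def retrain_for_symmetry(weights, boost=2.0):
    """Example of modifying the map: up-weight the more impaired channels."""
    new = weights.copy()
    new[:, 2:] *= boost     # columns 2-3 stand in for the more impaired side
    return new

if __name__ == "__main__":
    signals = [0.5, -0.2, 0.1, 0.3]
    print("cursor:", body_to_cursor(signals))
    print("cursor after remap:", body_to_cursor(signals, retrain_for_symmetry(W)))
```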

  20. A two-class self-paced BCI to control a robot in four directions.

    PubMed

    Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro

    2011-01-01

    In this work, an electroencephalographic analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward and move back. In order to reduce the probability of misclassification, the BCI is to be controlled with only two mental tasks (relaxed state versus imagination of right hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The obtained results show that the proposed interface enables control over the robot, even for subjects with low BCI performance. © 2011 IEEE
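
    A hedged sketch of how two mental classes can drive four navigation commands through an audio-cued interface: the cue cycles through the command list, and a detected "imagery" decision selects the option currently being announced. The labels and timing are illustrative; the authors' exact cueing protocol is not reproduced here.

```python
# Hedged sketch: an audio cue cycles through the options, detecting the
# "imagery" class selects the announced option, and "relax" lets the cue move on.
import itertools

COMMANDS = ["turn_right", "turn_left", "move_forward", "move_back"]

def select_command(classifier_outputs):
    """classifier_outputs: iterable of 'imagery' or 'relax', one per cue step."""
    cue = itertools.cycle(COMMANDS)
    for decision in classifier_outputs:
        announced = next(cue)          # option currently being announced
        if decision == "imagery":      # user confirms the announced option
            return announced
    return None                        # self-paced: no command issued

if __name__ == "__main__":
    # The user stays relaxed for two cues, then imagines a right-hand movement.
    print(select_command(["relax", "relax", "imagery"]))   # -> "move_forward"
```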

  1. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  2. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
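
    As an illustration of the gaze-centred camera behaviour described above, the sketch below computes proportional pan/tilt rates that pull the user's gaze point back toward the centre of the monitor, with a dead-band so small fixation jitter does not move the camera. Gains, screen size, and dead-band are assumed values, not those of the Aesop system.

```python
# Illustrative sketch: pan/tilt the camera in proportion to the offset between
# the gaze point and the centre of the video monitor. Parameters are made up.
def camera_velocity(gaze_xy, screen_wh=(1280, 1024), gain=0.002, deadband_px=50):
    """Return (pan, tilt) rates that pull the gaze point back to the centre."""
    cx, cy = screen_wh[0] / 2, screen_wh[1] / 2
    dx, dy = gaze_xy[0] - cx, gaze_xy[1] - cy
    pan = gain * dx if abs(dx) > deadband_px else 0.0
    tilt = -gain * dy if abs(dy) > deadband_px else 0.0   # screen y grows downward
    return pan, tilt

if __name__ == "__main__":
    print(camera_velocity((1100, 520)))   # gaze far right -> pan right, no tilt
    print(camera_velocity((650, 500)))    # near centre -> stay still
```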

  3. Actuator digital interface unit (AIU). [control units for space shuttle data system]

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Alternate versions of the actuator interface unit are presented. One alternate is a dual-failure immune configuration which feeds a look-and-switch dual-failure immune hydraulic system. The other alternate is a single-failure immune configuration which feeds a majority voting hydraulic system. Both systems communicate with the data bus through data terminals dedicated to each user subsystem. Both operational control data and configuration control information are processed in and out of the subsystem via the data terminal, yielding an actuator interface subsystem that is self-managing within its failure immunity capability.

  4. Command & Control (C2) Systems Acquisition Study

    DTIC Science & Technology

    1982-09-01

    The movement of substantial capability closer to individual users with significant improvements in the interface between the user and the system... description of the overall capability desired; (2) an architectural framework where evolution can occur with minimum subsequent redesign; and (3) a

  5. Evaluation of navigation interfaces in virtual environments

    NASA Astrophysics Data System (ADS)

    Mestre, Daniel R.

    2014-02-01

    When users are immersed in cave-like virtual reality systems, navigational interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, using navigation interfaces, physically static users experience self-motion (visually-induced vection). As a consequence, sensorial incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested, in two experimental studies, different locomotion interfaces. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a flystick®, we tested the effect of sensorial aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensorial aids tended to negatively impact spatial learning. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower, as compared to experiment 1, but the difference was not significant. Future research should further evaluate the hypothesis of the role of passively perceived optical flow in cybersickness, by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  7. IAC-1.5 - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Vos, R. G.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and a database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a database, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automating data transfer among analysis programs. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation modules are supplied for building and viewing models. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. 3) System dynamics - A DISCOS interface allows full use of this simulation program for either nonlinear time domain analysis or linear frequency domain analysis. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 5) Graphics - The graphics packages PLOT and MOSAIC are included in IAC. PLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc., while MOSAIC generates color raster displays of either tabular or array type data. Either DI3000 or PLOT-10 graphics software is required for full graphics capability. IAC is available by license for a period of 10 years to approved licensees. The licensed program product includes one complete set of supporting documentation. 
Additional copies of the documentation may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The basic central memory requirement is approximately 750KB. IAC includes the executive system, graphics modules, a database, general utilities, and the interfaces to all analysis and controls programs described above. Source code is provided for the control programs ORACLS, SAMSAN, NBOD2, and DISCOS. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. IAC was developed in 1985.

  8. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study

    PubMed Central

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-01-01

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object in six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the “Florida Secundaria” high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed more meaningful learning (concepts learnt were more durable). PMID:27792132

  9. Deep Space Network Antenna Logic Controller

    NASA Technical Reports Server (NTRS)

    Ahlstrom, Harlow; Morgan, Scott; Hames, Peter; Strain, Martha; Owen, Christopher; Shimizu, Kenneth; Wilson, Karen; Shaller, David; Doktomomtaz, Said; Leung, Patrick

    2007-01-01

    The Antenna Logic Controller (ALC) software controls and monitors the motion control equipment of the 4,000-metric-ton structure of the Deep Space Network 70-meter antenna. This program coordinates the control of 42 hydraulic pumps, while monitoring several interlocks for personnel and equipment safety. Remote operation of the ALC runs via the Antenna Monitor & Control (AMC) computer, which orchestrates the tracking functions of the entire antenna. This software provides a graphical user interface for local control, monitoring, and fault identification, and, at a high level, provides digital control of the axis brakes so that the AMC servo may control the motion of the antenna. Specific functions of the ALC also include routines for startup in cold weather, controlled shutdown for both normal and fault situations, and pump switching on failure. The increased monitoring, the ability to trend key performance characteristics, the improved fault detection and recovery, the centralization of all control at a single panel, and the simplification of the user interface have all reduced the required workforce to run 70-meter antennas. The ALC also increases antenna availability by reducing the time required to start up the antenna and to diagnose faults, and by providing additional insight into the performance of key parameters that aids preventive maintenance and helps avoid key element failure. The ALC User Display (AUD) is a graphical user interface with a hierarchical display structure, which provides high-level status information on the operation of the ALC, as well as detailed information for virtually all aspects of the ALC via drill-down displays. The operational status of an item, be it a function or assembly, is shown in the higher-level display. By pressing the item on the display screen, a new screen opens to show more detail of the function/assembly. Navigation tools and the map button allow immediate access to all screens.

  10. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

    Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse & keyboard-based interface both for expert and non-expert users.

  11. Designing an operator interface? Consider user's 'psychology'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toffer, D.E.

    The modern operator interface is a channel of communication between operators and the plant that, ideally, provides them with information necessary to keep the plant running at maximum efficiency. Advances in automation technology have increased information flow from the field to the screen. New and improved Supervisory Control and Data Acquisition (SCADA) packages provide designers with powerful and open design considerations. All too often, however, systems go to the field designed for the software rather than the operator. Plant operators' jobs have changed fundamentally, from controlling their plants from out in the field to doing so from within control rooms. Control room-based operation does not denote idleness. Trained operators should be engaged in examination of plant status and cognitive evaluation of plant efficiencies. Designers who are extremely computer literate often do not consider the demographics of field operators. Many field operators have little knowledge of modern computer systems. As a result, they do not take full advantage of the interface's capabilities. Designers often fail to understand the true nature of how operators run their plants. To aid field operators, designers must provide familiar controls and intuitive choices. To achieve success in interface design, it is necessary to understand the ways in which humans think conceptually, and to understand how they process this information physically. The physical and the conceptual are closely related when working with any type of interface. Designers should ask themselves: "What type of information is useful to the field operator?" Let's explore an integration model that contains the following key elements: (1) Easily navigated menus; (2) Reduced chances for misunderstanding; (3) Accurate representations of the plant or operation; (4) Consistent and predictable operation; (5) A pleasant and engaging interface that conforms to the operator's expectations.

  12. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface for a series of medical visualization tasks.

  13. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
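
    Two of the processing functions listed above, flat-field correction and recursive temporal filtering, have standard textbook forms; the NumPy sketch below shows those generic forms only and does not reproduce the LabVIEW implementation.

```python
# Sketch of two image-processing steps in their standard form: flat-field
# correction and a recursive (exponential) temporal filter.
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Gain-correct a frame: (raw - dark) / (flat - dark), scaled to the mean gain."""
    num = raw.astype(float) - dark
    den = np.clip(flat.astype(float) - dark, 1e-6, None)
    return num / den * den.mean()

def recursive_filter(prev_filtered, new_frame, alpha=0.25):
    """Exponential temporal filter: output = alpha*new + (1-alpha)*previous."""
    return alpha * new_frame + (1.0 - alpha) * prev_filtered

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark = np.full((4, 4), 10.0)
    flat = np.full((4, 4), 110.0) + rng.normal(0, 5, (4, 4))    # pixel gain map
    raw = (flat - dark) * 0.5 + dark                            # uniform scene through the gain map
    corrected = flat_field_correct(raw, flat, dark)
    print("corrected frame std:", round(corrected.std(), 3))    # ~0 after correction
    filtered = recursive_filter(prev_filtered=corrected, new_frame=corrected + 2.0)
    print("filtered mean shift:", round((filtered - corrected).mean(), 3))  # ~0.5
```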

  14. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, Cn; Yadava, G; Patel, V; Bednarek, Dr; Hoffmann, Kr; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  15. Comparison of tongue interface with keyboard for control of an assistive robotic arm.

    PubMed

    Struijk, Lotte N S Andreasen; Lontis, Romulus

    2017-07-01

    This paper demonstrates how an assistive 6 DoF robotic arm with a gripper can be controlled manually using a tongue interface. The proposed method suggests that it is possible for a user to manipulate the surroundings with his or her tongue using the inductive tongue control system as deployed in this study. The sensors of an inductive tongue-computer interface were mapped to the Cartesian control of an assistive robotic arm. The resulting control system was tested to compare manual control of the robot using a standard keyboard with control using the tongue interface. Two healthy subjects controlled the robotic arm to precisely move a bottle of water from one location to another. The results show that the tongue interface was able to fully control the robotic arm in a similar manner to the standard keyboard, resulting in the same number of successful manipulations and an average increase in task duration of up to 30% compared with the standard keyboard.
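
    A hedged sketch of the sensor-to-Cartesian mapping idea: each activated pad of the tongue interface is assigned to one incremental Cartesian motion of the arm or gripper. The pad layout and step size below are invented for illustration and do not describe the actual inductive tongue-computer interface layout.

```python
# Illustrative pad-to-motion mapping: activated pads become incremental
# Cartesian commands for the arm and gripper. Layout and step size are made up.
PAD_TO_MOTION = {
    0: ("x", +1), 1: ("x", -1),
    2: ("y", +1), 3: ("y", -1),
    4: ("z", +1), 5: ("z", -1),
    6: ("gripper", +1), 7: ("gripper", -1),
}

def cartesian_command(active_pads, step_mm=5.0):
    """Turn the set of currently activated pads into an incremental command."""
    command = {"x": 0.0, "y": 0.0, "z": 0.0, "gripper": 0.0}
    for pad in active_pads:
        axis, sign = PAD_TO_MOTION.get(pad, (None, 0))
        if axis is not None:
            command[axis] += sign * step_mm
    return command

if __name__ == "__main__":
    print(cartesian_command({0, 4}))   # move +x and +z by one step each
```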

  16. User-Friendly Interface Developed for a Web-Based Service for SpaceCAL Emulations

    NASA Technical Reports Server (NTRS)

    Liszka, Kathy J.; Holtz, Allen P.

    2004-01-01

    A team at the NASA Glenn Research Center is developing a Space Communications Architecture Laboratory (SpaceCAL) for protocol development activities for coordinated satellite missions. SpaceCAL will provide a multiuser, distributed system to emulate space-based Internet architectures, backbone networks, formation clusters, and constellations. As part of a new effort in 2003, building blocks are being defined for an open distributed system to make the satellite emulation test bed accessible through an Internet connection. The first step in creating a Web-based service to control the emulation remotely is providing a user-friendly interface for encoding the data into a well-formed and complete Extensible Markup Language (XML) document. XML provides coding that allows data to be transferred between dissimilar systems. Scenario specifications include control parameters, network routes, interface bandwidths, delay, and bit error rate. Specifications for all satellite, instruments, and ground stations in a given scenario are also included in the XML document. For the SpaceCAL emulation, the XML document can be created using XForms, a Webbased forms language for data collection. Contrary to older forms technology, the interactive user interface makes the science prevalent, not the data representation. Required versus optional input fields, default values, automatic calculations, data validation, and reuse will help researchers quickly and accurately define missions. XForms can apply any XML schema defined for the test mission to validate data before forwarding it to the emulation facility. New instrument definitions, facilities, and mission types can be added to the existing schema. The first prototype user interface incorporates components for interactive input and form processing. Internet address, data rate, and the location of the facility are implemented with basic form controls with default values provided for convenience and efficiency using basic XForms operations. Because different emulation scenarios will vary widely in their component structure, more complex operations are used to add and delete facilities.
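
    To illustrate the kind of scenario document the record describes, the sketch below builds a small XML fragment with link bandwidth, delay, bit error rate, and a ground-station facility using Python's standard library. The element and attribute names are hypothetical; the real SpaceCAL schema is not reproduced here.

```python
# Sketch of a scenario document of the kind described above, built with the
# standard library (Python 3.9+ for ET.indent). Names are hypothetical.
import xml.etree.ElementTree as ET

def build_scenario():
    scenario = ET.Element("scenario", name="demo-constellation")
    link = ET.SubElement(scenario, "link")
    ET.SubElement(link, "bandwidth", units="kbps").text = "1024"
    ET.SubElement(link, "delay", units="ms").text = "250"
    ET.SubElement(link, "bitErrorRate").text = "1e-7"
    ground = ET.SubElement(scenario, "facility", type="ground_station")
    ET.SubElement(ground, "address").text = "192.0.2.10"
    ET.SubElement(ground, "dataRate", units="kbps").text = "512"
    return scenario

if __name__ == "__main__":
    root = build_scenario()
    ET.indent(root)                                  # pretty-print the tree
    print(ET.tostring(root, encoding="unicode"))
```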

  17. A web based Radiation Oncology Dose Manager with a rich User Interface developed using AJAX, ruby, dynamic XHTML and the new Yahoo/EXT User Interface Library.

    PubMed

    Vali, Faisal; Hong, Robert

    2007-10-11

    With the evolution of AJAX, ruby on rails, advanced dynamic XHTML technologies and the advent of powerful user interface libraries for javascript (EXT, Yahoo User Interface Library), developers now have the ability to provide truly rich interfaces within web browsers, with reasonable effort and without third-party plugins. We designed and developed an example of such a solution. The User Interface allows radiation oncology practices to intuitively manage different dose fractionation schemes by helping estimate total dose to irradiated organs.

  18. Monitoring and controlling ATLAS data management: The Rucio web user interface

    NASA Astrophysics Data System (ADS)

    Lassnig, M.; Beermann, T.; Vigne, R.; Barisits, M.; Garonne, V.; Serfon, C.

    2015-12-01

    The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration of user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done in a massively parallel fashion due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human or programmatic access, making it easy to access selective parts of the information both in constrained frontends like web browsers and in remote services. This contribution will detail the reasons for these principles and the design choices taken. Additionally, the implementation, the interactions with external systems, and an evaluation of the system in production, both from a technological and user perspective, conclude this contribution.

  19. Dynamics simulation and controller interfacing for legged robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reichler, J.A.; Delcomyn, F.

    2000-01-01

    Dynamics simulation can play a critical role in the engineering of robotic control code, and there exist a variety of strategies both for building physical models and for interacting with these models. This paper presents an approach to dynamics simulation and controller interfacing for legged robots, and contrasts it to existing approaches. The authors describe dynamics algorithms and contact-resolution strategies for multibody articulated mobile robots based on the decoupled tree-structure approach, and present a novel scripting language that provides a unified framework for control-code interfacing, user-interface design, and data analysis. Special emphasis is placed on facilitating the rapid integration of control algorithms written in a standard object-oriented language (C++), the production of modular, distributed, reusable controllers, and the use of parameterized signal-transmission properties such as delay, sampling rate, and noise.
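
    The "parameterized signal-transmission properties" mentioned above can be pictured as a small channel object between the simulated robot and its controller; the Python sketch below samples a signal at a fixed rate, delays it by a few steps, and adds noise. The parameter values and class name are invented for illustration (the paper's own framework is C++-oriented).

```python
# Illustrative channel between simulation and controller: fixed-rate sampling
# with zero-order hold, a short delay line, and additive Gaussian noise.
import collections
import random

class SignalChannel:
    def __init__(self, delay_steps=3, sample_every=2, noise_std=0.01, seed=0):
        self.delay = collections.deque([0.0] * delay_steps, maxlen=delay_steps)
        self.sample_every = sample_every
        self.noise_std = noise_std
        self.last_sample = 0.0
        self.step_count = 0
        self.rng = random.Random(seed)

    def transmit(self, value):
        """Push one simulation value through sampling, delay, and noise."""
        if self.step_count % self.sample_every == 0:
            self.last_sample = value            # zero-order hold between samples
        self.step_count += 1
        self.delay.append(self.last_sample)
        delayed = self.delay[0]                 # oldest sample still in the line
        return delayed + self.rng.gauss(0.0, self.noise_std)

if __name__ == "__main__":
    channel = SignalChannel()
    for step in range(8):
        sensed = channel.transmit(float(step))  # ramp signal from the simulation
        print(step, round(sensed, 3))
```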

  20. Methods for Improving the User-Computer Interface. Technical Report.

    ERIC Educational Resources Information Center

    McCann, Patrick H.

    This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…

  1. Interface Metaphors for Interactive Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasper, Robert J.; Blaha, Leslie M.

    To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user’s knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display as well as the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms. Or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.

  2. Representing Graphical User Interfaces with Sound: A Review of Approaches

    ERIC Educational Resources Information Center

    Ratanasit, Dan; Moore, Melody M.

    2005-01-01

    The inability of computer users who are visually impaired to access graphical user interfaces (GUIs) has led researchers to propose approaches for adapting GUIs to auditory interfaces, with the goal of providing access for visually impaired people. This article outlines the issues involved in nonvisual access to graphical user interfaces, reviews…

  3. Design and usability evaluation of user-centered and visual-based aids for dietary food measurement on mobile devices in a randomized controlled trial.

    PubMed

    Liu, Ying-Chieh; Chen, Chien-Hung; Lee, Chien-Wei; Lin, Yu-Sheng; Chen, Hsin-Yun; Yeh, Jou-Yin; Chiu, Sherry Yueh-Hsia

    2016-12-01

    We designed and developed two interactive app interfaces for dietary food measurements on mobile devices. The user-centered designs of both the IPI (interactive photo interface) and the SBI (sketching-based interface) were evaluated. Four types of outcomes were assessed to evaluate the usability of mobile devices for dietary measurements, including accuracy, absolute weight differences, and the response time to determine the efficacy of food measurements. The IPI presented users with images of pre-determined portion sizes of a specific food and allowed users to scan and then select the most representative image matching the food that they were measuring. The SBI required users to relate the food shape to a readily available comparator (e.g., credit card) and scribble to shade in the appropriate area. A randomized controlled trial was conducted to evaluate their usability. A total of 108 participants were randomly assigned into the following three groups: the IPI (n=36) and SBI (n=38) experimental groups and the traditional life-size photo (TLP) group as the control. A total of 18 types of food items, each with 3-4 different weights, were randomly selected for assessment. The independent Chi-square test and t-test were performed for the dichotomous and continuous variable analyses, respectively. The total accuracy rates were 66.98%, 44.15%, and 72.06% for the IPI, SBI, and TLP, respectively. No significant difference was observed between the IPI and TLP, regardless of the accuracy proportion or weight differences. The SBI accuracy rates were significantly lower than the IPI and TLP accuracy rates, especially for several spooned, square cube, and sliced pie food items. The time needed to complete the operation assessment by the user was significantly lower for the IPI than for the SBI. Our study corroborates that the user-centered visual-based design of the IPI on a mobile device is comparable to the TLP in terms of usability for dietary food measurements. However, improvements are needed because both the IPI and TLP accuracies associated with some food shapes were lower than 60%. The SBI is not yet a viable aid. This innovative alternative requires further improvements to the user interface. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Vision based interface system for hands free control of an intelligent wheelchair

    PubMed Central

    Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi

    2009-01-01

    Background Due to the shift in the age structure of today's populations, the need to develop devices or technologies that support elderly and disabled people has been increasing. Traditionally, the wheelchair, both powered and manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it remains highly restrictive, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities affecting their limbs. Methods This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information, where the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infra-red sensors, a PC camera, and a vision system. The vision system analyzes the user's gestures in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. Result & conclusion The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to cluttered backgrounds and time-varying illumination. To demonstrate these advantages, the proposed system was tested with 34 users in indoor and outdoor environments and the results were compared with those of other systems; the results showed that the proposed system has superior performance in terms of speed and accuracy. Therefore, the proposed system provides a friendly and convenient interface for severely disabled people. PMID:19660132
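    The sketch below is a minimal, hypothetical rendering of the converter stage described above: face inclination selects the direction and mouth shape gates motion. The threshold values and command names are assumptions, not taken from the paper.

```python
# Illustrative sketch of the decision logic described in the abstract:
# face inclination selects direction, mouth shape selects go/stop.
# Threshold values and command names are assumptions, not from the paper.

def wheelchair_command(face_inclination_deg: float, mouth_state: str) -> str:
    """Map recognized gestures to a wheelchair command."""
    if mouth_state == "closed":          # closed mouth -> stop
        return "STOP"
    if face_inclination_deg < -10.0:     # head tilted left
        return "TURN_LEFT"
    if face_inclination_deg > 10.0:      # head tilted right
        return "TURN_RIGHT"
    return "FORWARD"                     # open mouth, face upright -> proceed

print(wheelchair_command(-15.0, "open"))   # TURN_LEFT
print(wheelchair_command(2.0, "closed"))   # STOP
```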

  5. A user-system interface agent

    NASA Technical Reports Server (NTRS)

    Wakim, Nagi T.; Srivastava, Sadanand; Bousaidi, Mehdi; Goh, Gin-Hua

    1995-01-01

    Agent-based technologies answer several challenges posed by additional information processing requirements in today's computing environments. In particular, (1) users desire interaction with computing devices in a mode similar to that used between people, (2) the efficiency and successful completion of information processing tasks often require a high level of expertise in complex and multiple domains, and (3) information processing tasks often require handling of large volumes of data and, therefore, continuous and endless processing activities. The concept of an agent is an attempt to address these new challenges by introducing information processing environments in which (1) users can communicate with a system in a natural way, (2) an agent is a specialist and a self-learner and, therefore, qualifies to be trusted to perform tasks independent of the human user, and (3) an agent is an entity that is continuously active, performing tasks that are either delegated to it or self-imposed. The work described in this paper focuses on the development of an interface agent for users of a complex information processing environment (IPE). This activity is part of an on-going effort to build a model for developing agent-based information systems. Such systems will be highly applicable to environments which require a high degree of automation, such as flight control operations, and/or processing of large volumes of data in complex domains, such as the EOSDIS environment and other multidisciplinary, scientific data systems. The concept of an agent as an information processing entity is fully described with emphasis on characteristics of special interest to the User-System Interface Agent (USIA). Issues such as agent 'existence' and 'qualification' are discussed in this paper. Based on a definition of an agent and its main characteristics, we propose an architecture for the development of interface agents for users of an IPE that is agent-oriented and whose resources are likely to be distributed and heterogeneous in nature. The architecture of USIA is outlined in two main components: (1) the user interface, which is concerned with issues such as user dialog and interaction, user modeling, and adaptation to the user profile, and (2) the system interface part, which deals with identification of IPE capabilities, task understanding and feasibility assessment, and task delegation and coordination of assistant agents.

  6. Marshall Space Flight Center Telescience Resource Kit

    NASA Technical Reports Server (NTRS)

    Wade, Gina

    2016-01-01

    Telescience Resource Kit (TReK) is a suite of software applications that can be used to monitor and control assets in space or on the ground. The Telescience Resource Kit was originally developed for the International Space Station program. Since then it has been used to support a variety of NASA programs and projects including the WB-57 Ascent Vehicle Experiment (WAVE) project, the Fast Affordable Science and Technology Satellite (FASTSAT) project, and the Constellation Program. The Payloads Operations Center (POC), also known as the Payload Operations Integration Center (POIC), provides the capability for payload users to operate their payloads at their home sites. In this environment, TReK provides local ground support system services and an interface to utilize remote services provided by the POC. TReK provides ground system services for local and remote payload user sites including International Partner sites, Telescience Support Centers, and U.S. Investigator sites in over 40 locations worldwide. General Capabilities: Support for various data interfaces such as User Datagram Protocol, Transmission Control Protocol, and Serial interfaces. Data Services - retrieve, process, record, playback, forward, and display data (ground based data or telemetry data). Command - create, modify, send, and track commands. Command Management - Configure one TReK system to serve as a command server/filter for other TReK systems. Database - databases are used to store telemetry and command definition information. Application Programming Interface (API) - ANSI C interface compatible with commercial products such as Visual C++, Visual Basic, LabVIEW, Borland C++, etc. The TReK API provides a bridge for users to develop software to access and extend TReK services. Environments - development, test, simulations, training, and flight. Includes standalone training simulators.

  7. A Sensor Failure Simulator for Control System Reliability Studies

    NASA Technical Reports Server (NTRS)

    Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.

    1986-01-01

    A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM-PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real-time. A user defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.
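    The following sketch illustrates, under stated assumptions, how a user-defined failure scenario might be represented and applied to five sensor channels; the failure types and field names are illustrative and do not reflect the SFS implementation.

```python
import numpy as np

# Hypothetical sketch of a user-defined failure scenario applied to five
# sensor channels, in the spirit of the SFS described above. Failure types
# and field names are illustrative assumptions.

def apply_failure(signal, kind, start, magnitude=0.0, rate=0.0):
    """Return a copy of `signal` with a failure injected from sample `start`."""
    out = signal.copy()
    t = np.arange(len(signal) - start)
    if kind == "bias":            # step offset
        out[start:] += magnitude
    elif kind == "drift":         # slow ramp
        out[start:] += rate * t
    elif kind == "stuck":         # hold last good value
        out[start:] = out[start - 1]
    return out

scenario = [
    {"channel": 0, "kind": "bias", "start": 200, "magnitude": 5.0},
    {"channel": 3, "kind": "drift", "start": 350, "rate": 0.02},
]

signals = np.random.default_rng(1).normal(size=(5, 1000))  # five sensor channels
for failure in scenario:
    ch = failure.pop("channel")
    signals[ch] = apply_failure(signals[ch], **failure)
```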

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peffer, Therese; Blumstein, Carl; Culler, David

    The Project uses state-of-the-art computer science to extend the benefits of Building Automation Systems (BAS) typically found in large buildings (>100,000 square foot) to medium-sized commercial buildings (<50,000 sq ft). The BAS developed in this project, termed OpenBAS, uses an open-source and open software architecture platform, user interface, and plug-and-play control devices to facilitate adoption of energy efficiency strategies in the commercial building sector throughout the United States. At the heart of this “turn key” BAS is the platform with three types of controllers—thermostat, lighting controller, and general controller—that are easily “discovered” by the platform in a plug-and-play fashion. The user interface showcases the platform and provides the control system set-up, system status display, and means of automatically mapping the control points in the system.

  9. A sensor failure simulator for control system reliability studies

    NASA Astrophysics Data System (ADS)

    Melcher, K. J.; Delaat, J. C.; Merrill, W. C.; Oberle, L. G.; Sadler, G. G.; Schaefer, J. H.

    A real-time Sensor Failure Simulator (SFS) was designed and assembled for the Advanced Detection, Isolation, and Accommodation (ADIA) program. Various designs were considered. The design chosen features an IBM-PC/XT. The PC is used to drive analog circuitry for simulating sensor failures in real-time. A user defined scenario describes the failure simulation for each of the five incoming sensor signals. Capabilities exist for editing, saving, and retrieving the failure scenarios. The SFS has been tested closed-loop with the Controls Interface and Monitoring (CIM) unit, the ADIA control, and a real-time F100 hybrid simulation. From a productivity viewpoint, the menu driven user interface has proven to be efficient and easy to use. From a real-time viewpoint, the software controlling the simulation loop executes at greater than 100 cycles/sec.

  10. Instrumentino: An Open-Source Software for Scientific Instruments.

    PubMed

    Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C

    2015-01-01

    Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.

  11. Remote Control and Data Acquisition: A Case Study

    NASA Technical Reports Server (NTRS)

    DeGennaro, Alfred J.; Wilkinson, R. Allen

    2000-01-01

    This paper details software tools developed to remotely command experimental apparatus, and to acquire and visualize the associated data in soft real time. The work was undertaken because commercial products failed to meet the needs. This work has identified six key factors intrinsic to development of quality research laboratory software. Capabilities include access to all new instrument functions without any programming or dependence on others to write drivers or virtual instruments, simple full screen text-based experiment configuration and control user interface, months of continuous experiment run-times, order of 1% CPU load for condensed matter physics experiment described here, very little imposition of software tool choices on remote users, and total remote control from anywhere in the world over the Internet or from home on a 56 Kb modem as if the user is sitting in the laboratory. This work yielded a set of simple robust tools that are highly reliable, resource conserving, extensible, and versatile, with a uniform simple interface.

  12. An Advanced NSSS Integrity Monitoring System for Shin-Kori Nuclear Units 3 and 4

    NASA Astrophysics Data System (ADS)

    Oh, Yang Gyun; Galin, Scott R.; Lee, Sang Jeong

    2010-12-01

    The advanced design features of the NSSS (Nuclear Steam Supply System) Integrity Monitoring System for Shin-Kori Nuclear Units 3 and 4 are summarized herein. During the overall system design and detailed component design processes, many design improvements have been made to the system. The major design changes are: 1) the application of a common software platform for all subsystems, 2) the implementation of remote access, control, and monitoring capabilities, and 3) the equipment redesign and rearrangement that has simplified the system architecture. These changes affect cabinet size, the number of cables, cyber-security, graphical user interfaces, and interfaces with other monitoring systems. The system installation and operation for Shin-Kori Nuclear Units 3 and 4 will be more convenient than those for previous Korean nuclear units in view of its remote control capability, automated test functions, improved user interface functions, and much reduced cabling.

  13. Application driven interface generation for EASIE. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kao, Ya-Chen

    1992-01-01

    The Environment for Application Software Integration and Execution (EASIE) provides a user interface and a set of utility programs which support the rapid integration and execution of analysis programs about a central relational database. EASIE provides users with two basic modes of execution. One of them is a menu-driven execution mode, called Application-Driven Execution (ADE), which provides sufficient guidance to review data, select a menu action item, and execute an application program. The other mode of execution, called Complete Control Execution (CCE), provides an extended executive interface which allows in-depth control of the design process. Currently, the EASIE system is based on alphanumeric techniques only. It is the purpose of this project to extend the flexibility of the EASIE system in the ADE mode by implementing it in a window system. Secondly, a set of utilities will be developed to assist the experienced engineer in the generation of an ADE application.

  14. The 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  15. Study of the human/ITS interface issues on the design of traffic information bulletin board and traffic control signal displays

    DOT National Transportation Integrated Search

    2002-10-01

    The success of automation for intelligent transportation systems is ultimately contingent upon the Interface between the users (humans) and the system (ITS). The issues of variable message signs (VMS) and traffic signal device (TSD) design were studi...

  16. Adaptive Phase Delay Generator

    NASA Technical Reports Server (NTRS)

    Greer, Lawrence

    2013-01-01

    There are several experimental setups involving rotating machinery that require some form of synchronization. The adaptive phase delay generator (APDG), the Bencic-1000, is a flexible instrument that allows the user to generate pulses synchronized to the rising edge of a tachometer signal from any piece of rotating machinery. These synchronized pulses can vary by the delay angle, pulse width, number of pulses per period, number of skipped pulses, and total number of pulses. Due to the design of the pulse generator, any and all of these parameters can be changed independently, yielding an unparalleled level of versatility. There are two user interfaces to the APDG. The first is a LabVIEW program that has the advantage of displaying all of the pulse parameters and input signal data within one neatly organized window on the PC monitor. Furthermore, the LabVIEW interface plots the rpm of the two input signal channels in real time. The second user interface is a handheld portable device that can go anywhere a computer is not accessible. It consists of a liquid-crystal display and keypad, which enable the user to control the unit by scrolling through a host of command menus and parameter listings. The APDG combines all of the desired synchronization control into one unit. The experimenter can adjust the delay, pulse width, pulse count, and number of skipped pulses, and produce a specified number of pulses per revolution. Each of these parameters can be changed independently, providing an unparalleled level of versatility when synchronizing hardware to a host of rotating machinery. The APDG allows experimenters to set up quickly and generate a host of synchronizing configurations using a simple user interface, which hopefully leads to faster results.
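    The short sketch below shows one plausible way to derive pulse start/stop times from the tachometer period and the parameters listed above (delay angle, pulse width, pulses per revolution, skipped pulses); the skip convention and the example numbers are assumptions.

```python
# Illustrative sketch of how pulse times might be derived from the tachometer
# period and the APDG parameters listed above (delay angle, pulse width,
# pulses per revolution, skipped pulses). All numbers are assumptions.

def pulse_schedule(period_s, delay_deg, width_s, pulses_per_rev, skip=0):
    """Return (start, stop) times within one revolution for each output pulse."""
    t0 = period_s * delay_deg / 360.0            # delay angle -> time offset
    spacing = period_s / pulses_per_rev          # evenly spaced pulses
    schedule = []
    for i in range(pulses_per_rev):
        if skip and i % (skip + 1) != 0:         # drop skipped pulses
            continue
        start = t0 + i * spacing
        schedule.append((start, start + width_s))
    return schedule

# 3000 rpm -> 20 ms period; fire 4 pulses per revolution, 30 deg after the edge.
print(pulse_schedule(period_s=0.020, delay_deg=30, width_s=0.001, pulses_per_rev=4))
```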

  17. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
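    As a hedged illustration of the kind of conversion mentioned above, the sketch below maps quantitative RGB values to a coarse qualitative color name via HSV thresholds; the hue bins and labels are assumptions and not the CHIMES method itself.

```python
import colorsys

# Hedged sketch of one way to map quantitative RGB values to a qualitative
# color label, e.g. for checking color-usage guidelines. The hue bins and
# names are assumptions, not the CHIMES method.

def qualitative_color(r, g, b):
    """Classify an RGB triple (values in 0-1) into a coarse color name."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if v < 0.15:
        return "black"
    if s < 0.20:
        return "white" if v > 0.85 else "gray"
    hue_deg = h * 360.0
    bins = [(30, "red"), (90, "yellow"), (150, "green"),
            (210, "cyan"), (270, "blue"), (330, "magenta"), (360, "red")]
    return next(name for limit, name in bins if hue_deg < limit)

print(qualitative_color(0.9, 0.1, 0.1))   # red
print(qualitative_color(0.2, 0.2, 0.8))   # blue
```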

  18. 10 CFR Appendix B to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Freezers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... factor. 1.2 “Anti-sweat heater” means a device incorporated into the design of a freezer to prevent the accumulation of moisture on exterior or interior surfaces of the cabinet. 1.3 “Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat...

  19. Technical Requirements Analysis and Control Systems (TRACS) Initial Operating Capability (IOC) documentation

    NASA Technical Reports Server (NTRS)

    Hammond, Dana P.

    1991-01-01

    The Technical Requirements Analysis and Control Systems (TRACS) software package is described. TRACS offers supplemental tools for the analysis, control, and interchange of project requirements. This package provides the fundamental capability to analyze and control requirements, serves as a focal point for project requirements, and integrates a system that supports efficient and consistent operations. TRACS uses relational database technology (ORACLE), in a stand-alone or distributed environment, that can be used to coordinate the activities required to support a project through its entire life cycle. TRACS uses a set of keyword- and mouse-driven screens (HyperCard) which imposes adherence through a controlled user interface. The user interface provides an interactive capability to interrogate the database and to display or print project requirement information. TRACS has a limited report capability, but can be extended with PostScript conventions.

  20. A study of social information control affordances and gender difference in Facebook self-presentation.

    PubMed

    Kuo, Feng-Yang; Tseng, Chih-Yi; Tseng, Fan-Chuan; Lin, Cathy S

    2013-09-01

    Affordances refer to how interface features of an IT artifact, perceived by its users in terms of their potentials for action, may predict the intensity of usage. This study investigates three social information affordances for expressive information control, privacy information control, and image information control in Facebook. The results show that the three affordances can significantly explain how Facebook's interface designs facilitate users' self-presentation activities. In addition, the findings reveal that males are more engaged in expressing information than females, while females are more involved in privacy control than males. A practical application of our study is to compare and contrast the level of affordances offered by various social network sites (SNS) like Facebook and Twitter, as well as differences in online self-presentations across cultures. Our approach can therefore be useful to investigate how SNS design features can be tailored to specific gender and culture needs.

  1. Passive wireless tags for tongue controlled assistive technology interfaces.

    PubMed

    Rakibet, Osman O; Horne, Robert J; Kelly, Stephen W; Batchelor, John C

    2016-03-01

    Tongue control with low profile, passive mouth tags is demonstrated as a human-device interface by communicating values of tongue-tag separation over a wireless link. Confusion matrices are provided to demonstrate user accuracy in targeting by tongue position. Accuracy is found to increase dramatically after short training sequences with errors falling close to 1% in magnitude with zero missed targets. The rate at which users are able to learn accurate targeting with high accuracy indicates that this is an intuitive device to operate. The significance of the work is that innovative very unobtrusive, wireless tags can be used to provide intuitive human-computer interfaces based on low cost and disposable mouth mounted technology. With the development of an appropriate reading system, control of assistive devices such as computer mice or wheelchairs could be possible for tetraplegics and others who retain fine motor control capability of their tongues. The tags contain no battery and are intended to fit directly on the hard palate, detecting tongue position in the mouth with no need for tongue piercings.
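    The small sketch below shows how per-target accuracy and overall error might be summarized from a targeting confusion matrix of the kind reported above; the matrix values are made up for illustration.

```python
import numpy as np

# Small sketch of summarizing a targeting confusion matrix, as used above to
# report user accuracy: rows are intended targets, columns are selections.
# The matrix values are made up for illustration.

confusion = np.array([
    [49, 1, 0],
    [0, 50, 0],
    [1, 0, 49],
])

per_target_accuracy = confusion.diagonal() / confusion.sum(axis=1)
overall_error = 1.0 - confusion.diagonal().sum() / confusion.sum()

print(per_target_accuracy)            # accuracy for each tongue position
print(f"overall error = {overall_error:.1%}")
```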

  2. Comparing two anesthesia information management system user interfaces: a usability evaluation.

    PubMed

    Wanderer, Jonathan P; Rao, Anoop V; Rothwell, Sarah H; Ehrenfeld, Jesse M

    2012-11-01

    Anesthesia information management systems (AIMS) have been developed by multiple vendors and are deployed in thousands of operating rooms around the world, yet not much is known about measuring and improving AIMS usability. We developed a methodology for evaluating AIMS usability in a low-fidelity simulated clinical environment and used it to compare an existing user interface with a revised version. We hypothesized that the revised user interface would be more useable. In a low-fidelity simulated clinical environment, twenty anesthesia providers documented essential anesthetic information for the start of the case using both an existing and a revised user interface. Participants had not used the revised user interface previously and completed a brief training exercise prior to the study task. All participants completed a workload assessment and a satisfaction survey. All sessions were recorded. Multiple usability metrics were measured. The primary outcome was documentation accuracy. Secondary outcomes were perceived workload, number of documentation steps, number of user interactions, and documentation time. The interfaces were compared and design problems were identified by analyzing recorded sessions and survey results. Use of the revised user interface was shown to improve documentation accuracy from 85.1% to 92.4%, a difference of 7.3% (95% confidence interval [CI] for the difference 1.8 to 12.7). The revised user interface decreased the number of user interactions by 6.5 for intravenous documentation (95% CI 2.9 to 10.1) and by 16.1 for airway documentation (95% CI 11.1 to 21.1). The revised user interface required 3.8 fewer documentation steps (95% CI 2.3 to 5.4). Airway documentation time was reduced by 30.5 seconds with the revised workflow (95% CI 8.5 to 52.4). There were no significant time differences noted in intravenous documentation or in total task time. No difference in perceived workload was found between the user interfaces. Two user interface design problems were identified in the revised user interface. The usability of anesthesia information management systems can be evaluated using a low-fidelity simulated clinical environment. User testing of the revised user interface showed improvement in some usability metrics and highlighted areas for further revision. Vendors of AIMS and those who use them should consider adopting methods to evaluate and improve AIMS usability.
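    A minimal sketch of the kind of comparison reported above, computing the difference between two proportions with a Wald-style 95% confidence interval; the counts are placeholders, not the study's actual data.

```python
from math import sqrt

# Minimal sketch of the kind of comparison reported above: the difference
# between two proportions with a Wald-style 95% confidence interval.
# The counts below are placeholders, not the study's actual denominators.

def diff_of_proportions(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, ci = diff_of_proportions(x1=170, n1=200, x2=185, n2=200)
print(f"difference = {diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```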

  3. The control of float zone interfaces by the use of selected boundary conditions

    NASA Technical Reports Server (NTRS)

    Foster, L. M.; Mcintosh, J.

    1983-01-01

    The main goal of the float zone crystal growth project of NASA's Materials Processing in Space Program is to thoroughly understand the molten zone/freezing crystal system and all the mechanisms that govern this system. The surface boundary conditions required to give flat float zone solid melt interfaces were studied and computed. The results provide float zone furnace designers with better methods for controlling solid melt interface shapes and for computing thermal profiles and gradients. Documentation and a user's guide were provided for the computer software.

  4. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactive displaying and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjusting of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X and provides a version controlled source code repository for community development (https://sourceforge.net/projects/pray-plot-rayinvr/).

  5. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    NASA Technical Reports Server (NTRS)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The technical challenges, engineering solutions, and results of the NOCC computer-human interface design are presented. The user-centered design process was as follows: determine the design criteria for user concerns; assess the impact of design decisions on the users; and determine the technical aspects of the implementation (tools, platforms, etc.). The NOCC hardware architecture is illustrated. A graphical model of the DSN that represented the hierarchical structure of the data was constructed. The DSN spacecraft summary display is shown. Navigation from top to bottom is accomplished by clicking the appropriate button for the element about which the user desires more detail. The telemetry summary display and the antenna color decision table are also shown.

  6. Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery.

    PubMed

    Travaglini, T A; Swaney, P J; Weaver, Kyle D; Webster, R J

    The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap.

  7. Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery

    PubMed Central

    Travaglini, T. A.; Swaney, P. J.; Weaver, Kyle D.; Webster, R. J.

    2016-01-01

    The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap. PMID:26752501

  8. CDC/1000: a Control Data Corporation remote batch terminal emulator for Hewlett-Packard minicomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, D.E.

    1981-02-01

    The Control Data Corporation Type 200 User Terminal utilizes a unique communications protocol to provide users with batch mode remote terminal access to Control Data computers. CDC/1000 is a software subsystem that implements this protocol on Hewlett-Packard minicomputers running the Real Time Executive III, IV, or IVB operating systems. This report provides brief descriptions of the various software modules comprising CDC/1000, and contains detailed instructions for integrating CDC/1000 into the Hewlett Packard operating system and for operating UTERM, the user interface program for CDC/1000. 6 figures.

  9. Starting Over: Current Issues in Online Catalog User Interface Design.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1992-01-01

    Discussion of online catalogs focuses on issues in interface design. Issues addressed include understanding the user base; common user access (CUA) with personal computers; common command language (CCL); hyperlinks; screen design issues; differences from card catalogs; indexes; graphic user interfaces (GUIs); color; online help; and remote users.…

  10. An assessment of the real-time application capabilities of the SIFT computer system

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research, it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  11. Software Implemented Fault-Tolerant (SIFT) user's guide

    NASA Technical Reports Server (NTRS)

    Green, D. F., Jr.; Palumbo, D. L.; Baltrus, D. W.

    1984-01-01

    Program development for a Software Implemented Fault Tolerant (SIFT) computer system is accomplished in the NASA LaRC AIRLAB facility using a DEC VAX-11 to interface with eight Bendix BDX 930 flight control processors. The interface software which provides this SIFT program development capability was developed by AIRLAB personnel. This technical memorandum describes the application and design of this software in detail, and is intended to assist both the user in performance of SIFT research and the systems programmer responsible for maintaining and/or upgrading the SIFT programming environment.

  12. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    The developments in computer technology in the last decade change the ways of computer utilization. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments and especially its user interfaces a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language, defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting the applications also to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as running example to illustrate the approach.

  13. User interface using a 3D model for video surveillance

    NASA Astrophysics Data System (ADS)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.
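    The sketch below illustrates, under assumed field names, how camera-field metadata could be stored synchronously with each video frame so that playback can drive the 3D model view; it is not the Media Controller implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of pairing camera-field metadata with each video frame
# so that playback can reconstruct the 3D view, as the abstract describes.
# The field names are assumptions for illustration only.

@dataclass
class CameraField:
    pan_deg: float
    tilt_deg: float
    zoom: float

@dataclass
class Frame:
    index: int
    timestamp_s: float
    camera: CameraField          # recorded alongside the image data

def record_frame(index, timestamp_s, pan, tilt, zoom):
    """Store the camera field synchronously with the frame it belongs to."""
    return Frame(index, timestamp_s, CameraField(pan, tilt, zoom))

stream = [record_frame(i, i / 30.0, pan=10.0 + i * 0.1, tilt=-5.0, zoom=2.0)
          for i in range(5)]
# On playback, the 3D model view is driven from stream[i].camera.
print(stream[2])
```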

  14. Integrated Model for E-Learning Acceptance

    NASA Astrophysics Data System (ADS)

    Ramadiani; Rodziah, A.; Hasan, S. M.; Rusli, A.; Noraini, C.

    2016-01-01

    E-learning will not succeed if the system is not used in accordance with user needs. The user interface is very important for encouraging use of the application. Many theories have discussed user interface usability evaluation and technology acceptance separately; relating interface usability evaluation to user acceptance could enhance the e-learning process. Therefore, an evaluation model for e-learning interface acceptance is considered important to investigate. The aim of this study is to propose an integrated e-learning user interface acceptance evaluation model. This model combines several theories of e-learning interface measurement, such as user learning style, usability evaluation, and user benefit. We formulated these constructs into questionnaires that were distributed to 125 English Language School (ELS) students. The statistical analysis used Structural Equation Modeling with LISREL v8.80 and MANOVA.

  15. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility-impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker and soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance, with an ITR of 37.77 bits/min, among the three different conditions (p<0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
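    For reference, the sketch below computes an information transfer rate using the commonly cited Wolpaw-style formulation for an N-command interface; whether the study used exactly this formulation is an assumption, and the example numbers are placeholders.

```python
from math import log2

# Sketch of the information transfer rate (ITR) as commonly defined for
# N-target interfaces (Wolpaw-style formulation). Whether the study used
# exactly this formulation is an assumption; numbers below are placeholders.

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Bits per selection scaled by the selection rate."""
    p = accuracy
    bits = log2(n_targets)
    if 0.0 < p < 1.0:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n_targets - 1))
    return bits * selections_per_min

# Example: 9 commands (8 directions + stop), 95% accuracy, 14 selections/min.
print(f"{itr_bits_per_min(9, 0.95, 14):.1f} bits/min")
```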

  16. WinTICS-24 --- A Telescope Control Interface for MS Windows

    NASA Astrophysics Data System (ADS)

    Hawkins, R. Lee

    1995-12-01

    WinTICS-24 is a telescope control system interface and observing assistant written in Visual Basic for MS Windows. It provides the ability to control a telescope and up to 3 other instruments via the serial ports on an IBM-PC compatible computer, all from one consistent user interface. In addition to telescope control, WinTICS contains an observing logbook, trouble log (which can automatically email its entries to a responsible person), lunar phase display, object database (which allows the observer to type in the name of an object and automatically slew to it), a time of minimum calculator for eclipsing binary stars, and an interface to the Guide CD-ROM for bringing up finder charts of the current telescope coordinates. Currently WinTICS supports control of DFM telescopes, but is easily adaptable to other telescopes and instrumentation.

  17. Remapping residual coordination for controlling assistive devices and recovering motor functions

    PubMed Central

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias; Mussa-Ivaldi, Ferdinando A.; Casadio, Maura

    2015-01-01

    The concept of human motor redundancy attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any single well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in human–machine interfaces (HMIs) developments. The HMI can act at different levels decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs bypass completely the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user’s residual upper body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between left and right side of the body, with an increase of mobility and strength of all the degrees of freedom in the participants involved in the control of the interface. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. PMID:26341935
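    A minimal sketch of the PCA-based remapping described above: calibration data from several body-motion channels define two principal directions that map each new sample to cursor coordinates. The data, channel count, and gain are synthetic assumptions.

```python
import numpy as np

# Minimal sketch of the PCA-based remapping described above: high-dimensional
# upper-body signals are projected onto two principal components that drive
# the cursor. Data, channel count, and scaling are synthetic placeholders.

rng = np.random.default_rng(0)

# Calibration: e.g. 8 body-motion channels recorded while the user moves freely.
calibration = rng.normal(size=(500, 8))

mean = calibration.mean(axis=0)
_, _, vt = np.linalg.svd(calibration - mean, full_matrices=False)
projection = vt[:2]                 # first two principal directions

def body_to_cursor(sample, gain=0.1):
    """Map one 8-channel body sample to (x, y) cursor coordinates."""
    return gain * projection @ (sample - mean)

print(body_to_cursor(rng.normal(size=8)))
```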

  18. IAC - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 
    3) System dynamics - The DISCOS simulation program, which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analysis and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an engineering analysis program which contains a general purpose library of over 600 subroutines.

  19. A Review and Reappraisal of Adaptive Human-Computer Interfaces in Complex Control Systems

    DTIC Science & Technology

    2006-08-01

    maneuverability measures. The cost elements were expressed as fuzzy membership functions. Figure 9 shows the flowchart of the route planner. A fuzzy navigator...and updating of the user model, which contains information about three generic stereotypes (beginner, intermediate, and expert users) plus an

  20. Proximity Displays for Access Control

    ERIC Educational Resources Information Center

    Vaniea, Kami

    2012-01-01

    Managing access to shared digital information, such as photographs and documents, is difficult for end users who are accumulating an increasingly large and diverse collection of data that they want to share with others. Current policy-management solutions require a user to proactively seek out and open a separate policy-management interface when…

  1. User Interface Technology for Formal Specification Development

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Formal specification development and modification are an essential component of the knowledge-based software life cycle. User interface technology is needed to empower end-users to create their own formal specifications. This paper describes the advanced user interface for AMPHION, a knowledge-based software engineering system that targets scientific subroutine libraries. AMPHION is a generic, domain-independent architecture that is specialized to an application domain through a declarative domain theory. Formal specification development and reuse is made accessible to end-users through an intuitive graphical interface that provides semantic guidance in creating diagrams denoting formal specifications in an application domain. The diagrams also serve to document the specifications. Automatic deductive program synthesis ensures that end-user specifications are correctly implemented. The tables that drive AMPHION's user interface are automatically compiled from a domain theory; portions of the interface can be customized by the end-user. The user interface facilitates formal specification development by hiding syntactic details, such as logical notation. It also turns some of the barriers to end-user specification development associated with strongly typed formal languages into active sources of guidance, without restricting advanced users. The interface is especially suited for specification modification. AMPHION has been applied to the domain of solar system kinematics through the development of a declarative domain theory. Testing over six months with planetary scientists indicates that AMPHION's interactive specification acquisition paradigm enables users to develop, modify, and reuse specifications at least an order of magnitude more rapidly than manual program development.

  2. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics through a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  3. The Keck keyword layer

    NASA Technical Reports Server (NTRS)

    Conrad, A. R.; Lupton, W. F.

    1992-01-01

    Each Keck instrument presents a consistent software view to the user interface programmer. The view consists of a small library of functions, which are identical for all instruments, and a large set of keywords, that vary from instrument to instrument. All knowledge of the underlying task structure is hidden from the application programmer by the keyword layer. Image capture software uses the same function library to collect data for the image header. Because the image capture software and the instrument control software are built on top of the same keyword layer, a given observation can be 'replayed' by extracting keyword-value pairs from the image header and passing them back to the control system. The keyword layer features non-blocking as well as blocking I/O. A non-blocking keyword write operation (such as setting a filter position) specifies a callback to be invoked when the operation is complete. A non-blocking keyword read operation specifies a callback to be invoked whenever the keyword changes state. The keyword-callback style meshes well with the widget-callback style commonly used in X window programs. The first keyword library was built for the two Keck optical instruments. More recently, keyword libraries have been developed for the infrared instruments and for telescope control. Although the underlying mechanisms used for inter-process communication by each of these systems vary widely (Lick MUSIC, Sun RPC, and direct socket I/O, respectively), a basic user interface has been written that can be used with any of these systems. Since the keyword libraries are bound to user interface programs dynamically at run time, only a single set of user interface executables is needed. For example, the same program, 'xshow', can be used to display continuously the telescope's position, the time left in an instrument's exposure, or both values simultaneously. Less generic tools that operate on specific keywords, for example an X display that controls optical instrument exposures, have also been written using the keyword layer.
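    The hypothetical sketch below captures the blocking-read / callback-monitor pattern described above; it is not the actual Keck keyword library API, and all names are illustrative.

```python
# Hypothetical sketch of a keyword layer with blocking reads/writes and
# callback-based (non-blocking) monitoring, in the spirit of the interface
# described above. This is not the actual Keck library API.

class Keyword:
    def __init__(self, name, value=None):
        self.name = name
        self._value = value
        self._callbacks = []

    def read(self):
        """Blocking read: return the current value."""
        return self._value

    def write(self, value):
        """Write a new value and notify any registered monitors."""
        self._value = value
        for cb in self._callbacks:
            cb(self.name, value)

    def monitor(self, callback):
        """Non-blocking read: invoke `callback` whenever the keyword changes."""
        self._callbacks.append(callback)

filter_pos = Keyword("FILTER")
filter_pos.monitor(lambda name, value: print(f"{name} -> {value}"))
filter_pos.write("B")      # the callback fires, as a GUI widget callback would
print(filter_pos.read())
```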

  4. A study of usability principles and interface design for mobile e-books.

    PubMed

    Wang, Chao-Ming; Huang, Ching-Hua

    2015-01-01

    This study examined usability principles and interface designs in order to understand the relationship between the intentions of mobile e-book interface designs and users' perceptions. First, this study summarised 4 usability principles and 16 interface attributes, in order to conduct usability testing and questionnaire survey by referring to Nielsen (1993), Norman (2002), and Yeh (2010), who proposed the usability principles. Second, this study used the interviews to explore the perceptions and behaviours of user operations through senior users of multi-touch prototype devices. The results of this study are as follows: (1) users' behaviour of operating an interactive interface is related to user prior experience; (2) users' rating of the visibility principle is related to users' subjective perception but not related to user prior experience; however, users' ratings of the ease, efficiency, and enjoyment principles are related to user prior experience; (3) the interview survey reveals that the key attributes affecting users' behaviour of operating an interface include aesthetics, achievement, and friendliness. This study conducts experiments to explore the effects of users’ prior multi-touch experience on users’ behaviour of operating a mobile e-book interface and users’ rating of usability principles. Both qualitative and quantitative data analyses were performed. By applying protocol analysis, key attributes affecting users’ behaviour of operation were determined.

  5. How to Develop a User Interface That Your Real Users Will Love

    ERIC Educational Resources Information Center

    Phillips, Donald

    2012-01-01

    A "user interface" is the part of an interactive system that bridges the user and the underlying functionality of the system. But people sometimes forget that the best interfaces will provide a platform to optimize the users' interactions so that they support and extend the users' activities in effective, useful, and usable ways. To look at it…

  6. Make E-Learning Effortless! Impact of a Redesigned User Interface on Usability through the Application of an Affordance Design Approach

    ERIC Educational Resources Information Center

    Park, Hyungjoo; Song, Hae-Deok

    2015-01-01

    Given that a user interface interacts with users, a critical factor to be considered in improving the usability of an e-learning user interface is user-friendliness. Affordances enable users to more easily approach and engage in learning tasks because they strengthen positive, activating emotions. However, most studies on affordances limit…

  7. Computer-Based Tools for Evaluating Graphical User Interfaces

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1997-01-01

    The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; rather, it should be part of an iterative design cycle, with the output of evaluation being fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.

  8. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    PubMed Central

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901

  9. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    PubMed

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
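
    The two records above name dynamic time warping (DTW) as the gesture recognition step. A generic DTW distance of the kind such a recognizer could use to compare an observed hand trajectory against stored gesture templates is sketched below; it is a textbook formulation, not the authors' implementation, and the example trajectories are invented.

        # Generic dynamic time warping (DTW) distance between two trajectories,
        # each an (n_samples, n_features) array. Illustrative only.
        import numpy as np


        def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
            n, m = len(seq_a), len(seq_b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return float(cost[n, m])


        if __name__ == "__main__":
            template = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])   # stored gesture
            observed = np.array([[0.1, 0.0], [0.9, 1.1], [2.1, -0.1], [2.2, 0.0]])
            print(f"DTW cost against template: {dtw_distance(template, observed):.3f}")

    A recognizer would compute this cost against each stored template and pick the smallest, typically with a rejection threshold for non-gestures.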

  10. On the control of brain-computer interfaces by users with cerebral palsy.

    PubMed

    Daly, Ian; Billinger, Martin; Laparra-Hernández, José; Aloise, Fabio; García, Mariano Lloria; Faller, Josef; Scherer, Reinhold; Müller-Putz, Gernot

    2013-09-01

    Brain-computer interfaces (BCIs) have been proposed as a potential assistive device for individuals with cerebral palsy (CP) to assist with their communication needs. However, it is unclear how well-suited BCIs are to individuals with CP. Therefore, this study aims to investigate to what extent these users are able to gain control of BCIs. This study is conducted with 14 individuals with CP attempting to control two standard online BCIs (1) based upon sensorimotor rhythm modulations, and (2) based upon steady state visual evoked potentials. Of the 14 users, 8 are able to use one or the other of the BCIs, online, with a statistically significant level of accuracy, without prior training. Classification results are driven by neurophysiological activity and are not seen to correlate with occurrences of artifacts. However, many of these users' accuracies, while statistically significant, would require either more training or more advanced methods before practical BCI control would be possible. The results indicate that BCIs may be controlled by individuals with CP but that many issues need to be overcome before practical application use may be achieved. This is the first study to assess the ability of a large group of different individuals with CP to gain control of an online BCI system. The results indicate that six users could control a sensorimotor rhythm BCI and three a steady state visual evoked potential BCI at statistically significant levels of accuracy (SMR accuracies, mean ± STD: 0.821 ± 0.116; SSVEP accuracies: 0.422 ± 0.069). Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Towards automation of user interface design

    NASA Technical Reports Server (NTRS)

    Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst

    1992-01-01

    This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMS's) which basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to get a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.

  12. An operator interface design for a telerobotic inspection system

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tso, Kam S.; Hayati, Samad

    1993-01-01

    The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotics. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface, designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability, supports three inspection strategies of teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  13. The Modular Aero-Propulsion System Simulation (MAPSS) Users' Guide

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Melcher, Kevin J.

    2004-01-01

    The Modular Aero-Propulsion System Simulation is a flexible turbofan engine simulation environment that provides the user with a platform to develop advanced control algorithms. It is capable of testing the performance of control designs on a validated and verified generic engine model. In addition, it is able to generate state-space linear models of the engine model to aid in controller design. The engine model used in MAPSS is a generic high-pressure-ratio, dual-spool, low-bypass, military-type, variable cycle turbofan engine with a digital controller. MAPSS is controlled by a graphical user interface (GUI) and this guide explains how to use it to take advantage of the capabilities of MAPSS.
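
    Since the guide above highlights MAPSS's ability to export state-space linear models for controller design, the following sketch shows how such a model could be exercised once exported. The matrices are made-up placeholders, not MAPSS engine dynamics, and this is an independent illustration rather than part of the MAPSS toolset.

        # Forward-Euler simulation of a generic linear state-space model
        # x' = Ax + Bu, y = Cx + Du, with placeholder matrices.
        import numpy as np


        def simulate_lti(A, B, C, D, x0, u_seq, dt):
            x = np.asarray(x0, dtype=float)
            outputs = []
            for u in u_seq:
                y = C @ x + D @ u
                outputs.append(y)
                x = x + dt * (A @ x + B @ u)   # Euler step of the state equation
            return np.array(outputs)


        if __name__ == "__main__":
            # Placeholder 2-state, 1-input, 1-output model (illustrative values only).
            A = np.array([[-1.0, 0.5], [0.0, -2.0]])
            B = np.array([[0.0], [1.0]])
            C = np.array([[1.0, 0.0]])
            D = np.array([[0.0]])
            u_step = [np.array([1.0])] * 200          # unit step on the input
            y = simulate_lti(A, B, C, D, x0=[0.0, 0.0], u_seq=u_step, dt=0.01)
            print(f"output after 2 s of step input: {y[-1, 0]:.4f}")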

  14. Exploring a Net Centric Architecture Using the Net Warrior Airborne Early Warning and Control Node

    DTIC Science & Technology

    2007-12-01

    implemented in different languages. Customisation Interfaces for customising components. User-friendly customisation tools will use these interfaces...Sun Enterprise Java Beans. Customisation Customisation in the context of components is defined in [Heineman & Councill 2001, p. 42] as ‘…the ability...of a consumer to adapt a component prior to its installation or use’. Customisation can be facilitated through the use of specialised interfaces

  15. Control of a nursing bed based on a hybrid brain-computer interface.

    PubMed

    Nengneng Peng; Rui Zhang; Haihua Zeng; Fei Wang; Kai Li; Yuanqing Li; Xiaobin Zhuang

    2016-08-01

    In this paper, we propose an intelligent nursing bed system which is controlled by a hybrid brain-computer interface (BCI) involving steady-state visual evoked potential (SSVEP) and P300. Specifically, the hybrid BCI includes an asynchronous brain switch based on SSVEP and P300, and a P300-based BCI. The brain switch is used to turn on/off the control system of the electric nursing bed through idle/control state detection, whereas the P300-based BCI is for operating the nursing bed. At the beginning, the user may focus on one group of flashing buttons in the graphic user interface (GUI) of the brain switch, which can simultaneously evoke SSVEP and P300, to switch on the control system. Here, the combination of SSVEP and P300 is used for improving the performance of the brain switch. Next, the user can control the nursing bed using the P300-based BCI. The GUI of the P300-based BCI includes 10 flashing buttons, which correspond to 10 functional operations, namely, left-side up, left-side down, back up, back down, bedpan open, bedpan close, legs up, legs down, right-side up, and right-side down. For instance, he/she can focus on the flashing button "back up" in the GUI of the P300-based BCI to activate the corresponding control such that the nursing bed is adjusted up. Eight healthy subjects participated in our experiment, and obtained an average accuracy of 93.75% and an average false positive rate (FPR) of 0.15 event/min. The effectiveness of our system was thus demonstrated.
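
    As a rough sketch of the command-dispatch side described above, the snippet below gates a ten-command menu behind an on/off brain switch. All of the signal acquisition and SSVEP/P300 classification is omitted, and the class and function names are placeholders invented here, not the authors' implementation.

        # Placeholder dispatch logic: a brain switch toggles idle/control state,
        # and a classified P300 target index selects one of ten bed operations.
        BED_COMMANDS = [
            "left-side up", "left-side down", "back up", "back down",
            "bedpan open", "bedpan close", "legs up", "legs down",
            "right-side up", "right-side down",
        ]


        class NursingBedController:
            def __init__(self) -> None:
                self.control_on = False

            def brain_switch(self, switch_detected: bool) -> None:
                """Toggle idle/control state when the SSVEP+P300 switch fires."""
                if switch_detected:
                    self.control_on = not self.control_on

            def dispatch(self, p300_target: int) -> str:
                """Map a classified P300 target index (0-9) to a bed operation."""
                if not self.control_on:
                    return "ignored: system idle"
                return f"executing: {BED_COMMANDS[p300_target]}"


        if __name__ == "__main__":
            bed = NursingBedController()
            bed.brain_switch(True)    # user attends the switch buttons
            print(bed.dispatch(2))    # classified target 2 -> "back up"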

  16. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual-Reality-based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  17. Identification and control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.; Singh, N. B.

    1992-01-01

    This paper presents an intelligent adaptive control system for the control of a solid-liquid interface of a crystal while it is growing via directional solidification inside a multizone transparent furnace. The task of the process controller is to establish a user-specified axial temperature profile and to maintain a desirable interface shape. Both single-input-single-output and multi-input-multi-output adaptive pole placement algorithms have been used to control the temperature. Also described is an intelligent measurement system to assess the shape of the crystal while it is growing. A color video imaging system observes the crystal in real time and determines the position and the shape of the interface. This information is used to evaluate the crystal growth rate, and to analyze the effects of translational velocity and temperature profiles on the shape of the interface. Creation of this knowledge base is the first step to incorporate image processing into furnace control.
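
    The pole-placement step mentioned above can be sketched with a generic library routine, shown below under invented numbers: the plant matrices stand in for an identified furnace-zone temperature model and are not the authors' parameters, and the adaptive identification part of their controller is not reproduced.

        # State-feedback pole placement for a placeholder two-zone thermal model
        # x' = Ax + Bu; the desired poles are chosen faster than the open loop.
        import numpy as np
        from scipy.signal import place_poles

        A = np.array([[-0.02, 0.01],
                      [0.01, -0.03]])          # placeholder plant dynamics
        B = np.array([[0.05, 0.0],
                      [0.0, 0.04]])            # placeholder heater inputs

        desired_poles = np.array([-0.10, -0.12])
        K = place_poles(A, B, desired_poles).gain_matrix   # state feedback u = -Kx

        closed_loop = A - B @ K
        print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))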

  18. An EOG-Based Human-Machine Interface for Wheelchair Control.

    PubMed

    Huang, Qiyun; He, Shenghong; Wang, Qihong; Gu, Zhenghui; Peng, Nengneng; Li, Kai; Zhang, Yuandong; Shao, Ming; Li, Yuanqing

    2017-07-27

    Non-manual human-machine interfaces (HMIs) have been studied for wheelchair control with the aim of helping severely paralyzed individuals regain some mobility. The challenge is to rapidly, accurately and sufficiently produce control commands, such as left and right turns, forward and backward motions, acceleration, deceleration, and stopping. In this paper, a novel electrooculogram (EOG)-based HMI is proposed for wheelchair control. Thirteen flashing buttons are presented in the graphical user interface (GUI), and each of the buttons corresponds to a command. These buttons flash one by one in a pre-defined sequence. The user can select a button by blinking in sync with its flashes. The algorithm detects the eye blinks from a channel of vertical EOG data and determines the user's target button based on the synchronization between the detected blinks and the button's flashes. For healthy subjects/patients with spinal cord injuries (SCIs), the proposed HMI achieved an average accuracy of 96.7%/91.7% and a response time of 3.53 s/3.67 s with a false positive rate (FPR) of zero. Using only one channel of vertical EOG signals associated with eye blinks, the proposed HMI can accurately provide sufficient commands with a satisfactory response time. The proposed HMI provides a novel non-manual approach for severely paralyzed individuals to control a wheelchair. Compared with a newly established EOG-based HMI, the proposed HMI can generate more commands with higher accuracy, a lower FPR and fewer electrodes.
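
    The synchronization idea described above, in which detected blinks are matched against each button's flash times, can be illustrated with the small scoring sketch below. Blink detection from the vertical EOG channel is omitted, and all names and numbers are placeholders rather than the authors' algorithm.

        # Score each button by how many of its flash onsets coincide (within a
        # tolerance window) with detected blink times, and pick the best match.
        from typing import Dict, List


        def select_button(flash_times: Dict[str, List[float]],
                          blink_times: List[float],
                          tolerance: float = 0.25) -> str:
            def matches(flashes: List[float]) -> int:
                return sum(any(abs(b - f) <= tolerance for b in blink_times)
                           for f in flashes)

            return max(flash_times, key=lambda name: matches(flash_times[name]))


        if __name__ == "__main__":
            flashes = {
                "forward": [0.0, 2.6, 5.2],
                "left":    [0.4, 3.0, 5.6],
                "stop":    [0.8, 3.4, 6.0],
            }
            blinks = [0.82, 3.38, 6.05]   # user blinked in sync with "stop"
            print("selected command:", select_button(flashes, blinks))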

  19. MAGMA: analysis of two-channel microarrays made easy.

    PubMed

    Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph

    2007-07-01

    The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R-scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in his local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web-representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.

  20. A qualitative study adopting a user-centered approach to design and validate a brain computer interface for cognitive rehabilitation for people with brain injury.

    PubMed

    Martin, Suzanne; Armstrong, Elaine; Thomson, Eileen; Vargiu, Eloisa; Solà, Marc; Dauwalder, Stefan; Miralles, Felip; Daly Lynn, Jean

    2017-07-14

    Cognitive rehabilitation is established as a core intervention within rehabilitation programs following a traumatic brain injury (TBI). Digitally enabled assistive technologies offer opportunities for clinicians to increase remote access to rehabilitation, supporting the transition into the home. Brain Computer Interface (BCI) systems can harness the residual abilities of individuals with limited function to gain control over computers through their brain waves. This paper presents an online cognitive rehabilitation application developed with therapists, to work remotely with people who have TBI, who will use BCI at home to engage in the therapy. A qualitative research study was completed with community-dwelling people post brain injury (end users) and a cohort of therapists involved in cognitive rehabilitation. A user-centered approach was taken over three phases covering the development, design and feasibility testing of this cognitive rehabilitation application, which includes two tasks (Find-a-Category and a Memory Card task). The therapist could remotely prescribe activity with different levels of difficulty. The service user had a home interface which would present the therapy activities. This novel work was achieved by an international consortium of academics, business partners and service users.

  1. Serial Interface through Stream Protocol on EPICS Platform for Distributed Control and Monitoring

    NASA Astrophysics Data System (ADS)

    Das Gupta, Arnab; Srivastava, Amit K.; Sunil, S.; Khan, Ziauddin

    2017-04-01

    Remote operation of equipment and devices is implemented in distributed systems in order to enable control and proper monitoring of process values. For such remote operations, the Experimental Physics and Industrial Control System (EPICS) is used as one of the important software tools for control and monitoring of a wide range of scientific parameters. A hardware interface is developed for implementation of EPICS software so that different equipment such as data converters, power supplies, pump controllers, etc. can be remotely operated through a stream protocol. EPICS base was set up on Windows as well as Linux operating systems for control and monitoring, while EPICS modules such as asyn and StreamDevice were used to interface the equipment with the standard RS-232/RS-485 protocol. StreamDevice communicates over the serial line through an interface to asyn drivers. The graphical user interface and alarm handling were implemented with the Motif Editor and Display Manager (MEDM) and Alarm Handler (ALH) command-line channel access utility tools. This paper describes the developed application, which was tested with different equipment and devices serially interfaced to PCs on a distributed network.
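
    On the client side, devices exposed through EPICS in this way appear as process variables that can be read, written and monitored over Channel Access. The sketch below uses the pyepics Python bindings as one possible client; the record names are placeholders, a running IOC is assumed, and this is not the MEDM/ALH tooling used in the paper.

        # Monitor and write EPICS process variables over Channel Access using pyepics.
        import time
        from epics import PV


        def on_change(pvname=None, value=None, **kw):
            """Callback invoked by Channel Access whenever the monitored PV updates."""
            print(f"{pvname} -> {value}")


        if __name__ == "__main__":
            pressure = PV("LAB:PUMP1:PRESSURE")    # placeholder record name
            setpoint = PV("LAB:PSU1:VOLT_SP")      # placeholder record name

            pressure.add_callback(on_change)       # asynchronous monitor
            setpoint.put(12.5)                     # remote write to the device

            time.sleep(30)                         # let monitor callbacks arrive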

  2. 10 CFR Appendix A1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... less for the freezing and storage of ice. 1.3“Anti-sweat heater” means a device incorporated into the... interior surfaces of the cabinet. 1.4“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5“Automatic defrost” means a...

  3. 10 CFR Appendix A to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... capacity (14.2 liters) or less for the freezing and storage of ice. 1.3“Anti-sweat heater” means a device... on the exterior or interior surfaces of the cabinet. 1.4“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5...

  4. 10 CFR Appendix B1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Freezers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... defined in HRF-1-1979 in cubic feet, times (2) an adjustment factor. 1.2 “Anti-sweat heater” means a... interior surfaces of the cabinet. 1.3 “Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.4 “Automatic Defrost” means a...

  5. 10 CFR Appendix A to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... capacity (14.2 liters) or less for the freezing and storage of ice. 1.3 “Anti-sweat heater” means a device... on the exterior or interior surfaces of the cabinet. 1.4 “Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5...

  6. 10 CFR Appendix B1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Freezers

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... defined in HRF-1-1979 in cubic feet, times (2) an adjustment factor. 1.2“Anti-sweat heater” means a device... surfaces of the cabinet. 1.3“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.4“Automatic Defrost” means a system in...

  7. 10 CFR Appendix B1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Freezers

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... defined in HRF-1-1979 in cubic feet, times (2) an adjustment factor. 1.2“Anti-sweat heater” means a device... surfaces of the cabinet. 1.3“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.4“Automatic Defrost” means a system in...

  8. 10 CFR Appendix A1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... less for the freezing and storage of ice. 1.3 “Anti-sweat heater” means a device incorporated into the... interior surfaces of the cabinet. 1.4 “Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5 “Automatic defrost” means a...

  9. 10 CFR Appendix A to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... capacity (14.2 liters) or less for the freezing and storage of ice. 1.3“Anti-sweat heater” means a device... on the exterior or interior surfaces of the cabinet. 1.4“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5...

  10. Draper Station Analysis Tool

    NASA Technical Reports Server (NTRS)

    Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip

    2011-01-01

    Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.

  11. The Technology Information Environment with Industry{trademark} system description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detry, R.; Machin, G.

    The Technology Information Environment with Industry (TIE-In{trademark}) provides users with controlled access to distributed laboratory resources that are packaged in intelligent user interfaces. These interfaces help users access resources without requiring the user to have technical or computer expertise. TIE-In utilizes existing, proven technologies such as the Kerberos authentication system, X-Windows, and UNIX sockets. A Front End System (FES) authenticates users and allows them to register for resources and subsequently access them. The FES also stores status and accounting information, and provides an automated method for the resource owners to recover costs from users. The resources available through TIE-In are typically laboratory-developed applications that are used to help design, analyze, and test components in the nation's nuclear stockpile. Many of these applications can also be used by US companies for non-weapons-related work. TIE-In allows these industry partners to obtain laboratory-developed technical solutions without requiring them to duplicate the technical resources (people, hardware, and software) at Sandia.

  12. Functional near-infrared spectroscopy for adaptive human-computer interfaces

    NASA Astrophysics Data System (ADS)

    Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.

    2015-03-01

    We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work [1] that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
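
    A minimal sketch of the workload-classification idea described above is given below: train a classifier on labelled fNIRS feature windows, then classify new windows to drive interface adaptation. The features, labels and classifier choice here are synthetic placeholders, not the authors' pipeline.

        # Train a linear SVM on placeholder fNIRS features and classify a new window.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Synthetic per-window features (e.g., mean hemoglobin changes per channel).
        low_load = rng.normal(0.0, 1.0, size=(100, 8))
        high_load = rng.normal(1.0, 1.0, size=(100, 8))
        X = np.vstack([low_load, high_load])
        y = np.array([0] * 100 + [1] * 100)       # 0 = low workload, 1 = high

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X, y)

        new_window = rng.normal(0.9, 1.0, size=(1, 8))
        state = "high workload" if clf.predict(new_window)[0] else "low workload"
        print("adapt interface for:", state)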

  13. Development of the Software for 30 inch Telescope Control System at KHAO

    NASA Astrophysics Data System (ADS)

    Mun, B.-S.; Kim, S.-J.; Jang, M.; Min, S.-W.; Seol, K.-H.; Moon, K.-S.

    2006-12-01

    Even though the 30-inch optical telescope at Kyung Hee Astronomy Observatory has been used to produce a series of scientific achievements since its first light in 1992, numerous difficulties in the operation of the telescope have hindered the precise observations needed for further research. The currently used PC-TCS (Personal Computer based Telescope Control System) software, based on the ISA bus, is outdated, lacks a user-friendly interface and cannot be scaled. In addition, accumulated errors generated by discordance between the input and output signals of the motion controller called for a new control system. Thus we have improved the telescope control system by updating the software and modifying mechanical parts. We applied a new BLDC (brushless DC) servo motor system to the mechanical parts of the telescope and developed control software using Visual Basic 6.0. As a result, we could achieve high accuracy in controlling the telescope and provide a user-friendly GUI (graphical user interface).

  14. Conversion of the TRACON operations concepts database into a formal sentence outline job task taxonomy.

    DOT National Transportation Integrated Search

    1995-05-01

    FAA Air Traffic Control Operations Concepts Volume VII: TRACON Controllers (1989), developed by CTA, Inc., a technical description of the duties of a TRACON air traffic control specialist (ATCS) formatted in User Interface Language, was restructured...

  15. Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.

    ERIC Educational Resources Information Center

    Acker, Stephen R.

    1986-01-01

    This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)

  16. A Web Service and Interface for Remote Electronic Device Characterization

    ERIC Educational Resources Information Center

    Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.

    2011-01-01

    A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…

  17. Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.

  18. Multi-degree of freedom joystick for virtual reality simulation.

    PubMed

    Head, M J; Nelson, C A; Siu, K C

    2013-11-01

    A modular control interface and simulated virtual reality environment were designed and created in order to determine how the kinematic architecture of a control interface affects minimally invasive surgery training. A user is able to selectively determine the kinematic configuration of an input device (number, type and location of degrees of freedom) for a specific surgical simulation through the use of modular joints and constraint components. Furthermore, passive locking was designed and implemented through the use of inflated latex tubing around rotational joints in order to allow a user to step away from a simulation without unwanted tool motion. It is believed that these features will facilitate improved simulation of a variety of surgical procedures and, thus, improve surgical skills training.

  19. IAC - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 
3) System dynamics - The DISCOS simulation program which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an engineering analysis program which contains a general purpose library of over 600 subroutines for numerical analysis. 5) Graphics - The graphics package IPLOT is included in IAC. IPLOT generates vector displays of tabular data in the form of curves, charts, correlation tables, etc. Either DI3000 or PLOT-10 graphics software is required for full graphic capability. In addition to these analysis tools, IAC 2.5 contains an IGES interface which allows the user to read arbitrary IGES files into an IAC database and to edit and output new IGES files. IAC is available by license for a period of 10 years to approved U.S. licensees. The licensed program product includes one set of supporting documentation. Additional copies may be purchased separately. IAC is written in FORTRAN 77 and has been implemented on a DEC VAX series computer operating under VMS. IAC can be executed by multiple concurrent users in batch or interactive mode. The program is structured to allow users to easily delete those program capabilities and "how to" examples they do not want in order to reduce the size of the package. The basic central memory requirement for IAC is approximately 750KB. The following programs are also available from COSMIC as separate packages: NASTRAN, SINDA/SINFLO, TRASYS II, DISCOS, ORACLS, SAMSAN, NBOD2, and INCA. The development of level 2.5 of IAC was completed in 1989.

  20. User Driven Image Stacking for ODI Data and Beyond via a Highly Customizable Web Interface

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.

    2015-09-01

    While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.

  1. ANALOG I/O MODULE TEST SYSTEM BASED ON EPICS CA PROTOCOL AND ACTIVEX CA INTERFACE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    YENG,YHOFF,L.

    2003-10-13

    Analog input (ADC) and output (DAC) modules play a substantial role in device-level control of accelerator and large experimental physics control systems. In order to get the best performance, some features of analog modules including linearity, accuracy, crosstalk, thermal drift and so on have to be evaluated during the preliminary design phase. Gain and offset error calibration and thermal drift compensation (if needed) may have to be done in the implementation phase as well. A natural technique for performing these tasks is to interface the analog I/O modules and GPIB-interface programmable test instruments with a computer, which can complete measurements or calibration automatically. A difficulty is that drivers of analog modules and test instruments usually work on totally different platforms (VxWorks vs. Windows). Developing new test routines and drivers for testing instruments under the VxWorks (or any other RTOS) platform is not a good solution because such systems have relatively poor user interfaces and developing such software requires substantial effort. The EPICS CA protocol and ActiveX CA interface provide another choice: a PC and LabVIEW based test system. Analog I/O modules can be interfaced from LabVIEW test routines via the ActiveX CA interface. Test instruments can be controlled via LabVIEW drivers, most of which are provided by instrument vendors or by National Instruments. LabVIEW also provides extensive data analysis and processing functions. Using these functions, users can generate powerful test routines very easily. Several applications built for the Spallation Neutron Source (SNS) Beam Loss Monitor (BLM) system are described in this paper.

  2. User interface issues in supporting human-computer integrated scheduling

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.

  3. Design of a telescope control system using an ARM microcontroller with embedded RTOS

    NASA Astrophysics Data System (ADS)

    Peñuela Pico, Cristian R.; Atara Montañez, Fabian A.; Cuervo, Juan C.; Gonzalez-Llorente, Jesus

    2014-08-01

    This work presents the design of a wireless control system that allows driving all the necessary instruments to control the orientation of an equatorial-mount telescope through a real-time operating system (RTOS) that runs on an ARM microcontroller. The control system is commanded through a user interface which works on the Android platform, giving the user the option to control the tracking mode, right ascension, and declination. The system was successfully deployed and tested during a one-hour observation of the Moon. The frequency measured by the oscilloscope is 66.67 Hz, which corresponds to the sidereal rate. The telescope control system allows the user not only to achieve better precision when locating a star but also to carry out long-duration tracking processes.

  4. Enabling end-user network monitoring via the multicast consolidated proxy monitor

    NASA Astrophysics Data System (ADS)

    Kanwar, Anshuman; Almeroth, Kevin C.; Bhattacharyya, Supratik; Davy, Matthew

    2001-07-01

    The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to a) provide a view of the multicast network at varying levels of granularity, b) provide end users with a limited ability to query the multicast infrastructure in real time, and c) protect the infrastructure from an overwhelming amount of monitoring load through load control. Operationally, our scheme allows scaling to the ISP's dimensions, adaptability to new protocols (introduced as multicast evolves), threshold detection for crucial parameters and an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.

  5. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)

    2001-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., that with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  6. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  7. Control of a 2 DoF robot using a brain-machine interface.

    PubMed

    Hortal, Enrique; Ubeda, Andrés; Iáñez, Eduardo; Azorín, José M

    2014-09-01

    In this paper, a non-invasive spontaneous Brain-Machine Interface (BMI) is used to control the movement of a planar robot. To that end, two mental tasks are used to manage the visual interface that controls the robot. The robot used is a PupArm, a force-controlled planar robot designed by the nBio research group at the Miguel Hernández University of Elche (Spain). Two control strategies are compared: hierarchical and directional control. The experimental test (performed by four users) consists of reaching four targets. The errors and time taken during the performance of the tests are compared for both control strategies (hierarchical and directional control). The advantages and disadvantages of each method are shown after the analysis of the results. The hierarchical control allows an accurate approach to the goals but is slower than the directional control, which, in turn, is less precise. The results show both strategies are useful to control this planar robot. In the future, by adding an extra device like a gripper, this BMI could be used in assistive applications such as grasping daily objects in a realistic environment. In order to compare the behavior of the system while taking into account the opinion of the users, a NASA Task Load Index (TLX) questionnaire is filled out after the two sessions are completed. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Non-invasive brain-computer interface system: towards its application as assistive technology.

    PubMed

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio

    2008-04-15

    The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management and mobility, according to the user's residual motor abilities. Brain-computer interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology for severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers to the user a communication interface that is matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects have not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinically useful BCI.

  9. Non invasive Brain-Computer Interface system: towards its application as assistive technology

    PubMed Central

    Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Schalk, Gerwin; Oriolo, Giuseppe; Cherubini, Andrea; Marciani, Maria Grazia; Babiloni, Fabio

    2010-01-01

    The quality of life of people suffering from severe motor disabilities can benefit from the use of current assistive technology capable of ameliorating communication, house-environment management and mobility, according to the user's residual motor abilities. Brain Computer Interfaces (BCIs) are systems that can translate brain activity into signals that control external devices. Thus they can represent the only technology for severely paralyzed patients to increase or maintain their communication and control options. Here we report on a pilot study in which a system was implemented and validated to allow disabled persons to improve or recover their mobility (directly or by emulation) and communication within the surrounding environment. The system is based on a software controller that offers to the user a communication interface that is matched with the individual's residual motor abilities. Patients (n=14) with severe motor disabilities due to progressive neurodegenerative disorders were trained to use the system prototype under a rehabilitation program carried out in a house-like furnished space. All users utilized regular assistive control options (e.g., microswitches or head trackers). In addition, four subjects learned to operate the system by means of a non-invasive EEG-based BCI. This system was controlled by the subjects' voluntary modulations of EEG sensorimotor rhythms recorded on the scalp; this skill was learnt even though the subjects have not had control over their limbs for a long time. We conclude that such a prototype system, which integrates several different assistive technologies including a BCI system, can potentially facilitate the translation from pre-clinical demonstrations to a clinically useful BCI. PMID:18394526

  10. Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments

    PubMed Central

    Víctor Rodrigo, Mercado-García

    2017-01-01

    Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not efficient or reliable yet for everyone at any time. Over the past few years, researchers have argued that main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities. PMID:29317861

  11. A reliable user authentication and key agreement scheme for Web-based Hospital-acquired Infection Surveillance Information System.

    PubMed

    Wu, Zhen-Yu; Tseng, Yi-Ju; Chung, Yufang; Chen, Yee-Chun; Lai, Feipei

    2012-08-01

    With the rapid development of the Internet, both digitization and electronic orientation are required for various applications in daily life. For hospital-acquired infection control, a Web-based Hospital-acquired Infection Surveillance System was implemented. Clinical data from different hospitals and systems were collected and analyzed. The hospital-acquired infection screening rules in this system utilized this information to detect different patterns of defined hospital-acquired infection. Moreover, these data were integrated into a user interface with a single entry point to assist physicians and healthcare providers in making decisions. Based on Service-Oriented Architecture, web-service techniques, which are suitable for integrating heterogeneous platforms, protocols, and applications, were used. In summary, this system simplifies the workflow of hospital infection control and improves healthcare quality. However, it is possible for attackers to intercept the data transmission or gain access to the user interface. To tackle illegal access and to prevent information from being stolen during transmission over the insecure Internet, a password-based user authentication scheme is proposed to ensure information integrity.

  12. Dose controlled low energy electron irradiator for biomolecular films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, S. V. K., E-mail: svkk@tifr.res.in; Tare, Satej T.; Upalekar, Yogesh V.

    2016-03-15

    We have developed a multi target, Low Energy Electron (LEE), precise dose controlled irradiator for biomolecular films. Up to seven samples can be irradiated one after another at any preset electron energy and dose under UHV conditions without venting the chamber. In addition, one more sample goes through all the steps except irradiation, which can be used as a control for comparison with the irradiated samples. All the samples are protected against stray electron irradiation by biasing them at −20 V during the entire period, except during irradiation. Ethernet-based communication electronics hardware, LEE beam control electronics and computer interface were developed in house. The graphical user interface to control the irradiation and dose measurement was developed using National Instruments LabWindows/CVI. The working and reliability of the dose controlled irradiator have been fully tested over the electron energy range of 0.5 to 500 eV by studying LEE induced single strand breaks to ΦX174 RF1 dsDNA.

  13. A Brain-Computer Interface (BCI) system to use arbitrary Windows applications by directly controlling mouse and keyboard.

    PubMed

    Spuler, Martin

    2015-08-01

    A Brain-Computer Interface (BCI) allows a user to control a computer by brain activity alone, without the need for muscle control. In this paper, we present an EEG-based BCI system based on code-modulated visual evoked potentials (c-VEPs) that enables the user to work with arbitrary Windows applications. Other BCI systems, like the P300 speller or BCI-based browsers, allow control of one dedicated application designed for use with a BCI. In contrast, the system presented in this paper does not consist of one dedicated application, but enables the user to control mouse cursor and keyboard input at the level of the operating system, thereby making it possible to use arbitrary applications. As the c-VEP BCI method was shown to enable very fast communication speeds (writing more than 20 error-free characters per minute), the presented system is the next step in replacing the traditional mouse and keyboard and enabling complete brain-based control of a computer.
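
    Operating-system-level injection of mouse and keyboard events, as opposed to a dedicated speller window, can be illustrated with a generic input-automation library. This is a minimal sketch only: the selection labels, the step size and the use of pyautogui are assumptions for illustration, not the authors' c-VEP decoding pipeline or implementation.

        # Sketch: translating discrete BCI selections into OS-level mouse/keyboard events.
        # The selection vocabulary and step size are hypothetical.
        import pyautogui

        STEP = 25  # cursor displacement in pixels per decoded selection

        def act(selection):
            if selection == "left":
                pyautogui.moveRel(-STEP, 0)
            elif selection == "right":
                pyautogui.moveRel(STEP, 0)
            elif selection == "up":
                pyautogui.moveRel(0, -STEP)
            elif selection == "down":
                pyautogui.moveRel(0, STEP)
            elif selection == "click":
                pyautogui.click()
            elif selection.startswith("type:"):
                pyautogui.typewrite(selection[5:])

        # Fake sequence standing in for decoded c-VEP selections.
        for s in ["right", "right", "down", "click", "type:hello"]:
            act(s)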

  14. Application of SQL database to the control system of MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Tomohiro; Omata, Koji; Konishi, Masahiro; Ichikawa, Takashi; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka Katsuno; Nishimura, Tetsuo

    2006-06-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru telescope. In order to perform observations of near-infrared imaging and spectroscopy with a cold slit mask, MOIRCS contains many device components, which are distributed on an Ethernet LAN. Two PCs wired to the focal plane array electronics operate the two HAWAII2 detectors, and two other PCs are used for integrated control and quick data reduction, respectively. Though most of the devices (e.g., filter and grism turrets, slit exchange mechanism for spectroscopy) are controlled via an RS232C interface, they are accessible over TCP/IP connections using TCP/IP to RS232C converters. Moreover, other devices are also connected to the Ethernet LAN. This network-distributed structure provides flexibility of hardware configuration. We have constructed an integrated control system for such network-distributed hardware, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS also has a network-distributed software design, applying TCP/IP socket communication to interprocess communication. In order to mediate the communication between the device interfaces and the user interfaces, we defined three layers in T-LECS: an external layer for user interface applications, an internal layer for device interface applications, and a communication layer, which connects the two layers above. In the communication layer, we store the data of the system in an SQL database server; they are status data, FITS header data, and also metadata such as device configuration data and FITS configuration data. We present our software system design and the database schema to manage observations of MOIRCS with Subaru.
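
    As a rough illustration of the communication-layer bookkeeping described above (status data, FITS header data and configuration data kept in an SQL database), the sketch below writes hypothetical device status and FITS header values into SQLite tables. The schema and device names are invented and do not reflect the actual T-LECS database.

        # Illustrative status/FITS-header store for a layered instrument control system.
        import sqlite3, time

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE status (ts REAL, device TEXT, key TEXT, value TEXT)")
        db.execute("CREATE TABLE fits_header (ts REAL, keyword TEXT, value TEXT)")

        def record_status(device, key, value):
            db.execute("INSERT INTO status VALUES (?, ?, ?, ?)",
                       (time.time(), device, key, value))

        record_status("filter_turret", "position", "J")      # hypothetical device reports
        record_status("grism_turret", "position", "open")
        db.execute("INSERT INTO fits_header VALUES (?, ?, ?)",
                   (time.time(), "FILTER01", "J"))

        print(db.execute("SELECT device, key, value FROM status").fetchall())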

  15. CARE 3 user-friendly interface user's guide

    NASA Technical Reports Server (NTRS)

    Martensen, A. L.

    1987-01-01

    CARE 3 predicts the unreliability of highly reliable reconfigurable fault-tolerant systems that include redundant computers or computer systems. CARE3MENU is a user-friendly interface used to create an input for the CARE 3 program. The CARE3MENU interface has been designed to minimize user input errors. Although a CARE3MENU session may be successfully completed and all parameters may be within specified limits or ranges, the CARE 3 program is not guaranteed to produce meaningful results if the user incorrectly interprets the CARE 3 stochastic model. The CARE3MENU User Guide provides complete information on how to create a CARE 3 model with the interface. The CARE3MENU interface runs under the VAX/VMS operating system.

  16. An architecture for rule based system explanation

    NASA Technical Reports Server (NTRS)

    Fennel, T. R.; Johannes, James D.

    1990-01-01

    A system architecture is presented which incorporates both graphics and text into explanations provided by rule based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include: closer integration of user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and allowing for more direct control of explanation depth and duration by the user. User models are suggested to control the type, amount, and order of information presented.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphrey, Walter R.

    CMS is a Windows application for tracking chemical inventories. Partners will use this application to record chemicals that are stored on their site and to perform periodic inventories of those chemicals. The application records information about stored chemicals from user input via the keyboard and barcode readers and stores that information into a single-file database (SQLite). A simple user login mechanism is used to control access to functions in the application. A user interface is provided that allows users to search the database and update data in the database.

  18. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems.

    PubMed

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

    This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidences, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for the individual users. The proposed methods can be easily integrated in devising more advanced SC schemes and/or strategies for automatic BCI self-adaptations.
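
    The recursive Bayesian update over candidate goals can be sketched in a few lines. The goal set, the likelihood table and the evidence stream below are invented placeholders, not the authors' models; the sketch only shows the generic recursion posterior(goal) ∝ P(evidence | goal) · prior(goal) applied each time new user input or gaze evidence arrives.

        # Minimal sketch of a recursive Bayesian update over hidden user goals.
        def bayes_update(belief, likelihood, evidence):
            """One step: posterior(g) is proportional to P(evidence | g) * prior(g)."""
            posterior = {g: likelihood[g].get(evidence, 1e-6) * p for g, p in belief.items()}
            total = sum(posterior.values())
            return {g: p / total for g, p in posterior.items()}

        goals = ["kitchen", "bedroom", "exit"]             # hypothetical navigation goals
        belief = {g: 1.0 / len(goals) for g in goals}      # uniform prior

        # Hypothetical likelihoods P(observed input or gaze cue | goal).
        likelihood = {
            "kitchen": {"turn_left": 0.7, "turn_right": 0.1, "forward": 0.2},
            "bedroom": {"turn_left": 0.1, "turn_right": 0.7, "forward": 0.2},
            "exit":    {"turn_left": 0.2, "turn_right": 0.2, "forward": 0.6},
        }

        for evidence in ["turn_left", "forward", "turn_left"]:   # stream of user inputs/gaze cues
            belief = bayes_update(belief, likelihood, evidence)
            print(evidence, {g: round(p, 3) for g, p in belief.items()})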

  19. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    NASA Astrophysics Data System (ADS)

    Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika

    2017-06-01

    Objective. This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidences, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Significance. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for the individual users. The proposed methods can be easily integrated in devising more advanced SC schemes and/or strategies for automatic BCI self-adaptations.

  20. Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.

  1. Measurement of user performance and attitudes assists the initial design of a computer user display and orientation method.

    PubMed

    Chase, C R; Ashikaga, T; Mazuzan, J E

    1994-07-01

    The objective of our study was to assess the acceptability of a proposed user interface for a visually interfaced computer-assisted anesthesia record (VISI-CAARE) before application programming was begun. The user interface was defined as the user display and its user orientation methods. We designed methods to measure user performance and attitude toward two different anesthesia record procedures: (1) the traditional pen-and-paper anesthetic record procedure of our hospital, and (2) VISI-CAARE. Performance measurements included the reaction speed (identifying the type and time of an event) and completion speed (describing the event). Performance also included accuracy of the recorded time of the event and accuracy of the description. User attitude was measured by (1) the physician's rating on a scale of 0 to 9 of the potential usefulness of computers in anesthesia care; (2) willingness to use the future application in the clinical environment; and (3) user suggestions for change. These measurements were used in a randomized trial of 21 physicians, of whom data from 20 were available. After exposure to VISI-CAARE, the experimental subjects' ranking of computer usefulness in anesthesia care improved significantly (4.2 +/- 1.1 to 7.6 +/- 1.5, p = 0.0001), as did the controls' (5.2 +/- 2.6 to 8 +/- 1.5, p = 0.0019). All the volunteers were willing to try the proposed prototype clinically, when it was ready. VISI-CAARE exposure was associated with faster and more accurate reaction to events than the traditional pen-and-paper method, and slower and more accurate description of events in an artificial mock setting. VISI-CAARE 1.1 demonstrated significant improvements in both reaction speed and completion speed over VISI-CAARE 1.0, after changes were made to the user display and orientation methods. With graphic user interface prototyping environments, one can obtain preliminary user attitude and performance data, even before application programming is begun. This may be helpful in revising initial display and orientation methods, while obtaining user interest and commitment before actual programming and clinical testing.

  2. Multimodal Excitatory Interfaces with Automatic Content Classification

    NASA Astrophysics Data System (ADS)

    Williamson, John; Murray-Smith, Roderick

    We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.

  3. Research on an expert system for database operation of simulation-emulation math models. Volume 2, Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.

    1985-01-01

    A reference manual is provided for NESS, a simulation expert system. This manual gives the user information regarding starting and operating the NASA expert simulation system (NESS). This expert system provides an intelligent interface to a generic simulation program for spacecraft attitude control problems. A menu of the functions the system can perform is provided. Control repeatedly returns to this menu after executing each user request.

  4. User Experience May be Producing Greater Heart Rate Variability than Motor Imagery Related Control Tasks during the User-System Adaptation in Brain-Computer Interfaces

    PubMed Central

    Alonso-Valerdi, Luz M.; Gutiérrez-Begovich, David A.; Argüello-García, Janet; Sepulveda, Francisco; Ramírez-Mendoza, Ricardo A.

    2016-01-01

    Brain-computer interface (BCI) is technology that is developing fast, but it remains inaccurate, unreliable and slow due to the difficulty of obtaining precise information from the brain. Consequently, the involvement of other biosignals to decode the user control tasks has risen in importance. A traditional way to operate a BCI system is via motor imagery (MI) tasks. As imaginary movements activate similar cortical structures and vegetative mechanisms as a voluntary movement does, heart rate variability (HRV) has been proposed as a parameter to improve the detection of MI-related control tasks. However, HR is very susceptible to body needs and environmental demands, and as BCI systems require high levels of attention, perceptual processing and mental workload, it is important to assess the practical effectiveness of HRV. The present study aimed to determine if brain and heart electrical signals (HRV) are modulated by MI activity used to control a BCI system, or if HRV is modulated by the user perceptions and responses that result from the operation of a BCI system (i.e., user experience). For this purpose, a database of 11 participants who were exposed to eight different situations was used. The sensory-cognitive load (intake and rejection tasks) was controlled in those situations. Two electrophysiological signals were utilized: electroencephalography and electrocardiography. From those biosignals, event-related (de-)synchronization maps and event-related HR changes were respectively estimated. The maps and the HR changes were cross-correlated in order to verify if both biosignals were modulated due to MI activity. The results suggest that HR varies according to the experience undergone by the user in a BCI working environment, and not because of the MI activity used to operate the system. PMID:27458384

  5. User interface design principles for the SSM/PMAD automated power system

    NASA Technical Reports Server (NTRS)

    Jakstas, Laura M.; Myers, Chris J.

    1991-01-01

    Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.

  6. TELICS—A Telescope Instrument Control System for Small/Medium Sized Astronomical Observatories

    NASA Astrophysics Data System (ADS)

    Srivastava, Mudit K.; Ramaprakash, A. N.; Burse, Mahesh P.; Chordia, Pravin A.; Chillal, Kalpesh S.; Mestry, Vilas B.; Das, Hillol K.; Kohok, Abhay A.

    2009-10-01

    For any modern astronomical observatory, it is essential to have an efficient interface between the telescope and its back-end instruments. However, for small and medium-sized observatories, this requirement is often limited by tight financial constraints. Therefore a simple yet versatile and low-cost control system is required for such observatories to minimize cost and effort. Here we report the development of a modern, multipurpose instrument control system TELICS (Telescope Instrument Control System) to integrate the controls of various instruments and devices mounted on the telescope. TELICS consists of an embedded hardware unit known as a common control unit (CCU) in combination with Linux-based data acquisition and user interface. The hardware of the CCU is built around the ATmega 128 microcontroller (Atmel Corp.) and is designed with a backplane, master-slave architecture. A Qt-based graphical user interface (GUI) has been developed and the back-end application software is based on C/C++. TELICS provides feedback mechanisms that give the operator good visibility and a quick-look display of the status and modes of instruments as well as data. TELICS has been used for regular science observations since 2008 March on the 2 m, f/10 IUCAA Telescope located at Girawali in Pune, India.

  7. Application of a single-flicker online SSVEP BCI for spatial navigation.

    PubMed

    Chen, Jingjing; Zhang, Dan; Engel, Andreas K; Gong, Qin; Maye, Alexander

    2017-01-01

    A promising approach for brain-computer interfaces (BCIs) employs the steady-state visual evoked potential (SSVEP) for extracting control information. Main advantages of these SSVEP BCIs are a simple and low-cost setup, little effort to adjust the system parameters to the user and comparatively high information transfer rates (ITR). However, traditional frequency-coded SSVEP BCIs require the user to gaze directly at the selected flicker stimulus, which is liable to cause fatigue or even photic epileptic seizures. The spatially coded SSVEP BCI we present in this article addresses this issue. It uses a single flicker stimulus that always appears in the extrafoveal field of view, yet allows the user to control four channels. We demonstrate the embedding of this novel SSVEP stimulation paradigm in the user interface of an online BCI for navigating a 2-dimensional computer game. Offline analysis of the training data reveals an average classification accuracy of 96.9±1.64%, corresponding to an information transfer rate of 30.1±1.8 bits/min. In online mode, the average classification accuracy reached 87.9±11.4%, which resulted in an ITR of 23.8±6.75 bits/min. We did not observe a strong relation between a subject's offline and online performance. Analysis of the online performance over time shows that users can reliably control the new BCI paradigm with stable performance over at least 30 minutes of continuous operation.
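
    The information transfer rates quoted above are, in most SSVEP studies, computed with the standard Wolpaw definition; assuming that definition (the abstract does not spell it out), the bits conveyed per selection for N targets at accuracy P, and the resulting ITR, are

        B = \log_2 N + P \log_2 P + (1 - P) \log_2\!\left(\frac{1 - P}{N - 1}\right),
        \qquad \mathrm{ITR} = B \cdot \frac{\text{selections}}{\text{minute}}.

    With N = 4 and P = 0.969 this gives B ≈ 1.75 bits per selection, so the reported 30.1 bits/min would correspond to roughly 17 selections per minute (about 3.5 s per selection) under this definition.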

  8. Developing a Graphical User Interface for the ALSS Crop Planning Tool

    NASA Technical Reports Server (NTRS)

    Koehlert, Erik

    1997-01-01

    The goal of my project was to create a graphical user interface for a prototype crop scheduler. The crop scheduler was developed by Dr. Jorge Leon and Laura Whitaker for the ALSS (Advanced Life Support System) program. The addition of a system-independent graphical user interface to the crop planning tool will make the application more accessible to a wider range of users and enhance its value as an analysis, design, and planning tool. My presentation will demonstrate the form and functionality of this interface. This graphical user interface allows users to edit system parameters stored in the file system. Data on the interaction of the crew, crops, and waste processing system with the available system resources is organized and labeled. Program output, which is stored in the file system, is also presented to the user in performance-time plots and organized charts. The menu system is designed to guide the user through analysis and decision making tasks, providing some help if necessary. The Java programming language was used to develop this interface in hopes of providing portability and remote operation.

  9. Natural interaction for unmanned systems

    NASA Astrophysics Data System (ADS)

    Taylor, Glenn; Purman, Ben; Schermerhorn, Paul; Garcia-Sampedro, Guillermo; Lanting, Matt; Quist, Michael; Kawatsu, Chris

    2015-05-01

    Military unmanned systems today are typically controlled by two methods: tele-operation or menu-based, search-and-click interfaces. Both approaches require the operator's constant vigilance: tele-operation requires constant input to drive the vehicle inch by inch; a menu-based interface requires eyes on the screen in order to search through alternatives and select the right menu item. In both cases, operators spend most of their time and attention driving and minding the unmanned systems rather than being warfighters. With these approaches, the platform and interface become more of a burden than a benefit. The availability of inexpensive sensor systems in products such as Microsoft Kinect™ or Nintendo Wii™ has resulted in new ways of interacting with computing systems, but new sensors alone are not enough. Developing useful and usable human-system interfaces requires understanding users and interaction in context: not just what new sensors afford in terms of interaction, but how users want to interact with these systems, for what purpose, and how sensors might enable those interactions. Additionally, the system needs to reliably make sense of the user's inputs in context, translate that interpretation into commands for the unmanned system, and give feedback to the user. In this paper, we describe an example natural interface for unmanned systems, called the Smart Interaction Device (SID), which enables natural two-way interaction with unmanned systems including the use of speech, sketch, and gestures. We present a few example applications of SID to different types of unmanned systems and different kinds of interactions.

  10. Automating spectral measurements

    NASA Astrophysics Data System (ADS)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
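
    The kind of COM/ActiveX interoperability described above, in which one application drives another through exposed automation objects, can be shown with Excel's well-known COM object model. Driving a vendor's DAQ ActiveX server would follow the same Dispatch pattern but with that server's own instrument-specific methods, which are not reproduced here; the scan values below are placeholders.

        # Windows-only sketch of COM/ActiveX automation using Excel's object model (pywin32).
        # A spectrophotometer DAQ ActiveX server would be driven the same way via its ProgID.
        import win32com.client

        excel = win32com.client.Dispatch("Excel.Application")
        excel.Visible = True
        wb = excel.Workbooks.Add()
        ws = wb.Worksheets(1)

        ws.Cells(1, 1).Value = "Wavelength (nm)"
        ws.Cells(1, 2).Value = "Transmittance"
        for row, (wl, t) in enumerate([(400, 0.91), (500, 0.88), (600, 0.85)], start=2):
            ws.Cells(row, 1).Value = wl       # placeholder scan data
            ws.Cells(row, 2).Value = t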

  11. Brain-computer interface controlled gaming: evaluation of usability by severely motor restricted end-users.

    PubMed

    Holz, Elisa Mira; Höhne, Johannes; Staiger-Sälzer, Pit; Tangermann, Michael; Kübler, Andrea

    2013-10-01

    Connect-Four, a new sensorimotor rhythm (SMR) based brain-computer interface (BCI) gaming application, was evaluated by four severely motor restricted end-users; two were in the locked-in state and had unreliable eye movement. Following the user-centred approach, usability of the BCI prototype was evaluated in terms of effectiveness (accuracy), efficiency (information transfer rate (ITR) and subjective workload) and users' satisfaction. Online performance varied strongly across users and sessions (median accuracy (%) of end-users: A=.65; B=.60; C=.47; D=.77). Our results thus yielded low to medium effectiveness in three end-users and high effectiveness in one end-user. Consequently, ITR was low (0.05-1.44 bits/min). Only two end-users were able to play the game in free-mode. Total workload was moderate but varied strongly across sessions. Main sources of workload were mental and temporal demand. Furthermore, frustration contributed to the subjective workload of two end-users. Nevertheless, most end-users accepted the BCI application well and rated satisfaction medium to high. Sources for dissatisfaction were (1) electrode gel and cap, (2) low effectiveness, (3) time-consuming adjustment and (4) not easy-to-use BCI equipment. All four end-users indicated ease of use as being one of the most important aspects of BCI. Effectiveness and efficiency are lower as compared to applications using the event-related potential as input channel. Nevertheless, the SMR-BCI application was satisfactorily accepted by the end-users and two of four could imagine using the BCI application in their daily life. Thus, despite moderate effectiveness and efficiency BCIs might be an option when controlling an application for entertainment. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Hybrid P300-based brain-computer interface to improve usability for people with severe motor disability: electromyographic signals for error correction during a spelling task.

    PubMed

    Riccio, Angela; Holz, Elisa Mira; Aricò, Pietro; Leotta, Francesco; Aloise, Fabio; Desideri, Lorenzo; Rimondini, Matteo; Kübler, Andrea; Mattia, Donatella; Cincotti, Febo

    2015-03-01

    To evaluate the impact of a hybrid control on usability of a P300-based brain-computer interface (BCI) system that was designed to control an assistive technology software and was integrated with an electromyographic channel for error correction. Proof-of-principle study with a convenience sample. Neurologic rehabilitation hospital. Participants (N=11) in this pilot study included healthy (n=8) and severely motor impaired (n=3) persons. The 3 people with severe motor disability were identified as potential candidates to benefit from the proposed hybrid BCI system for communication and environmental interaction. To eventually investigate the improvement in usability, we compared 2 modalities of BCI system control: a P300-based and a hybrid P300 electromyographic-based mode of control. System usability was evaluated according to the following outcome measures within 3 domains: (1) effectiveness (overall system accuracy and P300-based BCI accuracy); (2) efficiency (throughput time and users' workload); and (3) satisfaction (users' satisfaction). We also considered the information transfer rate and time for selection. Findings obtained in healthy participants were in favor of a higher usability of the hybrid control as compared with the nonhybrid. A similar trend was indicated by the observational results gathered from each of the 3 potential end-users. The proposed hybrid BCI control modality could provide end-users with severe motor disability with an option to exploit some residual muscular activity, which could not be fully reliable for properly controlling an assistive technology device. The findings reported in this pilot study encourage the implementation of a clinical trial involving a large cohort of end-users. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  13. The need for separate operational and engineering user interfaces for command and control of airborne synthetic aperture radar systems

    NASA Astrophysics Data System (ADS)

    Klein, Laura M.; McNamara, Laura A.

    2017-05-01

    In this paper, we address the needed components to create usable engineering and operational user interfaces (UIs) for airborne Synthetic Aperture Radar (SAR) systems. As airborne SAR technology gains wider acceptance in the remote sensing and Intelligence, Surveillance, and Reconnaissance (ISR) communities, the need for effective and appropriate UIs to command and control these sensors has also increased. However, despite the growing demand for SAR in operational environments, the technology still faces an adoption roadblock, in large part due to the lack of effective UIs. It is common to find operational interfaces that have barely grown beyond the disparate tools engineers and technologists developed to demonstrate an initial concept or system. While sensor usability and utility are common requirements to engineers and operators, their objectives for interacting with the sensor are different. As such, the amount and type of information presented ought to be tailored to the specific application.

  14. Applications of graphics to support a testbed for autonomous space vehicle operations

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.; Aldridge, J. P.; Benson, S.; Horner, S.; Kullman, A.; Mulder, T.; Parrott, W.; Roman, D.; Watts, G.; Bochsler, Daniel C.

    1989-01-01

    Researchers describe their experience using graphics tools and utilities while building an application, AUTOPS, that uses a graphical Macintosh (TM)-like interface for the input and display of data, and animation graphics to enhance the presentation of results of autonomous space vehicle operations simulations. AUTOPS is a test bed for evaluating decisions for intelligent control systems for autonomous vehicles. Decisions made by an intelligent control system, e.g., a revised mission plan, might be displayed to the user in textual format, or the user can witness the effects of those decisions via out-of-the-window graphics animations. Although a textual description conveys essentials, a graphics animation conveys the replanning results in a more convincing way. Similarly, iconic and menu-driven screen interfaces provide the user with more meaningful options and displays. Presented here are experiences with the SunView and TAE Plus graphics tools used for interface design, and the Johnson Space Center Interactive Graphics Laboratory animation graphics tools used for generating out-of-the-window graphics.

  15. Overview of Graphical User Interfaces.

    ERIC Educational Resources Information Center

    Hulser, Richard P.

    1993-01-01

    Discussion of graphical user interfaces for online public access catalogs (OPACs) covers the history of OPACs; OPAC front-end design, including examples from Indiana University and the University of Illinois; and planning and implementation of a user interface. (10 references) (EA)

  16. StarView: The object oriented design of the ST DADS user interface

    NASA Technical Reports Server (NTRS)

    Williams, J. D.; Pollizzi, J. A.

    1992-01-01

    StarView is the user interface being developed for the Hubble Space Telescope Data Archive and Distribution Service (ST DADS). ST DADS is the data archive for HST observations and a relational database catalog describing the archived data. Users will use StarView to query the catalog and select appropriate datasets for study. StarView sends requests for archived datasets to ST DADS, which processes the requests and returns the data to the user. StarView is designed to be a powerful and extensible user interface. Unique features include an internal relational database to navigate query results, a form definition language that will work with both CRT and X interfaces, a data definition language that will allow StarView to work with any relational database, and the ability to generate ad hoc queries without requiring the user to understand the structure of the ST DADS catalog. Ultimately, StarView will allow the user to refine queries in the local database for improved performance and merge in data from external sources for correlation with other query results. The user will be able to create a query from single or multiple forms, merging the selected attributes into a single query. Arbitrary selection of attributes for querying is supported. The user will be able to select how query results are viewed. A standard form or table-row format may be used. Navigation capabilities are provided to aid the user in viewing query results. Object oriented analysis and design techniques were used in the design of StarView to support the mechanisms and concepts required to implement these features. One such mechanism is the Model-View-Controller (MVC) paradigm. The MVC allows the user to have multiple views of the underlying database, while providing a consistent mechanism for interaction regardless of the view. This approach supports both CRT and X interfaces while providing a common mode of user interaction. Another powerful abstraction is the concept of a Query Model. This concept allows a single query to be built from single or multiple forms before it is submitted to ST DADS. Supporting this concept is the ad hoc query generator, which allows the user to select and qualify an indeterminate number of attributes from the database. The user does not need any knowledge of how the joins across various tables are to be resolved. The ad hoc generator calculates the joins automatically and generates the correct SQL query.
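
    As a rough illustration of what an ad hoc query generator of this kind does, the sketch below assembles an SQL statement from user-selected attributes and derives the necessary join conditions from a table of known relationships. The table names, columns and join keys are invented and do not reflect the actual ST DADS catalog schema.

        # Toy ad hoc query generator: pick attributes, infer tables and joins, emit SQL.
        ATTR_TABLE = {"target_name": "observations", "exposure": "observations",
                      "instrument": "instruments", "pi_name": "proposals"}
        JOINS = {("observations", "instruments"): "observations.instr_id = instruments.id",
                 ("observations", "proposals"):   "observations.prop_id = proposals.id"}

        def build_query(selected_attrs, qualification=""):
            tables = {ATTR_TABLE[a] for a in selected_attrs}
            join_conds = [cond for (t1, t2), cond in JOINS.items()
                          if t1 in tables and t2 in tables]
            sql = "SELECT " + ", ".join(f"{ATTR_TABLE[a]}.{a}" for a in selected_attrs)
            sql += " FROM " + ", ".join(sorted(tables))
            where = join_conds + ([qualification] if qualification else [])
            if where:
                sql += " WHERE " + " AND ".join(where)
            return sql

        print(build_query(["target_name", "instrument"], "observations.exposure > 100"))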

  17. Developing A Web-based User Interface for Semantic Information Retrieval

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Keller, Richard M.

    2003-01-01

    While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.

  18. CLIPS application user interface for the PC

    NASA Technical Reports Server (NTRS)

    Jenkins, Jim; Holbrook, Rebecca; Shewhart, Mark; Crouse, Joey; Yarost, Stuart

    1991-01-01

    The majority of applications that utilize expert system development programs for their knowledge representation and inferencing capability require some form of interface with the end user. This interface is more than likely an interaction through the computer screen. When building an application, the user interface can prove to be the most difficult and time-consuming aspect to program. Commercial products currently exist which address this issue. To keep pace, the C Language Integrated Production System (CLIPS) will need a solution for its lack of an easy-to-use Application User Interface (AUI). This paper presents a survey of the DoD CLIPS user community and provides the backbone of a possible solution.

  19. A user interface development tool for space science systems Transportable Applications Environment (TAE) Plus

    NASA Technical Reports Server (NTRS)

    Szczur, Martha R.

    1990-01-01

    The Transportable Applications Environment Plus (TAE PLUS), developed at NASA's Goddard Space Flight Center, is a portable What You See Is What You Get (WYSIWYG) user interface development and management system. Its primary objective is to provide an integrated software environment that allows interactive prototyping and development of user interfaces, as well as management of the user interface within the operational domain. Although TAE Plus is applicable to many types of applications, its focus is supporting user interfaces for space applications. This paper discusses what TAE Plus provides and how the implementation has utilized state-of-the-art technologies within graphic workstations, windowing systems and object-oriented programming languages.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foslien, Wendy K.; Curtner, Keith L.

    Because of growing energy demands and shortages, residential home owners are turning to energy conservation measures and smart home energy management devices to help them reduce energy costs and live more sustainably. In this context, the Honeywell team researched, developed, and tested the Context Aware Smart Home Energy Manager (CASHEM) as a trusted advisor for home energy management. The project focused on connecting multiple devices in a home through a uniform user interface. The design of the user interface was an important feature of the project because it provided a single place for the homeowner to control all devices and was also where they received coaching. CASHEM then used data collected from homes to identify the contexts that affect operation of home appliances. CASHEM's goal was to reduce energy consumption while keeping the user's key needs satisfied. Thus, CASHEM was intended to find the opportunities to minimize energy consumption in a way that fit the user's lifestyle.

  1. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.
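
    A minimal example of the kind of unsupervised pixel classification that such a tool exposes through its interface can be written in a few lines; the clustering method and parameters below are generic stand-ins, not PixelLearn's own algorithms, and the random image is a placeholder for real data.

        # Unsupervised pixel classification sketch: cluster the pixels of an RGB image by color.
        import numpy as np
        from sklearn.cluster import KMeans

        image = np.random.rand(64, 64, 3)             # placeholder for a real scientific image
        pixels = image.reshape(-1, 3)                 # one feature vector (R, G, B) per pixel

        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
        class_map = labels.reshape(image.shape[:2])   # per-pixel class image

        print(np.bincount(labels))                    # number of pixels assigned to each class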

  2. Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0

    NASA Technical Reports Server (NTRS)

    Knox, J. C.

    1996-01-01

    The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connect the components via flow streams and define their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.

  3. Human/Computer Interfacing in Educational Environments.

    ERIC Educational Resources Information Center

    Sarti, Luigi

    1992-01-01

    This discussion of educational applications of user interfaces covers the benefits of adopting database techniques in organizing multimedia materials; the evolution of user interface technology, including teletype interfaces, analogic overlay graphics, window interfaces, and adaptive systems; application design problems, including the…

  4. Developing the Multimedia User Interface Component (MUSIC) for the Icarus Presentation System (IPS)

    DTIC Science & Technology

    1993-12-01

    In-house report (AD-A276 341, December 1993; work performed June-August 1993) on developing the Multimedia User Interface Component (MUSIC) for the Icarus Presentation System (IPS). The report documents the initial research, design, and implementation of a prototype of the MUSIC.

  5. Development and evaluation of nursing user interface screens using multiple methods.

    PubMed

    Hyun, Sookyung; Johnson, Stephen B; Stetson, Peter D; Bakken, Suzanne

    2009-12-01

    Building upon the foundation of the Structured Narrative Electronic Health Record (EHR) model, we applied theory-based (combined Technology Acceptance Model and Task-Technology Fit Model) and user-centered methods to explore nurses' perceptions of functional requirements for an electronic nursing documentation system, design user interface screens reflective of the nurses' perspectives, and assess nurses' perceptions of the usability of the prototype user interface screens. The methods resulted in user interface screens that were perceived to be easy to use, potentially useful, and well-matched to nursing documentation tasks associated with Nursing Admission Assessment, Blood Administration, and Nursing Discharge Summary. The methods applied in this research may serve as a guide for others wishing to implement user-centered processes to develop or extend EHR systems. In addition, some of the insights obtained in this study may be informative to the development of safe and efficient user interface screens for nursing document templates in EHRs.

  6. LabVIEW interface with Tango control system for a multi-technique X-ray spectrometry IAEA beamline end-station at Elettra Sincrotrone Trieste

    NASA Astrophysics Data System (ADS)

    Wrobel, P. M.; Bogovac, M.; Sghaier, H.; Leani, J. J.; Migliori, A.; Padilla-Alvarez, R.; Czyzycki, M.; Osan, J.; Kaiser, R. B.; Karydas, A. G.

    2016-10-01

    A new synchrotron beamline end-station for multipurpose X-ray spectrometry applications has been recently commissioned and is currently accessible by end-users at the XRF beamline of Elettra Sincrotrone Trieste. The end-station consists of an ultra-high vacuum chamber that includes as its main instrument a seven-axis motorized manipulator for sample and detector positioning, different kinds of X-ray detectors and optical cameras. The beamline end-station allows performing measurements with different X-ray spectrometry techniques such as Microscopic X-Ray Fluorescence analysis (μXRF), Total Reflection X-Ray Fluorescence analysis (TXRF), Grazing Incidence/Exit X-Ray Fluorescence analysis (GI-XRF/GE-XRF), X-Ray Reflectometry (XRR), and X-Ray Absorption Spectroscopy (XAS). A LabVIEW Graphical User Interface (GUI) bound to the Tango control system and consisting of many custom-made software modules is utilized as a user-friendly tool for controlling the entire set of end-station hardware components. The present work describes this advanced Tango and LabVIEW software platform, which utilizes in an optimal synergistic manner the merits and functionality of these well-established programming and equipment-control tools.

  7. Cooperative processing user interfaces for AdaNET

    NASA Technical Reports Server (NTRS)

    Gutzmann, Kurt M.

    1991-01-01

    A cooperative processing user interface (CUI) system shares the task of graphical display generation and presentation between the user's computer and a remote host. The communications link between the two computers is typically a modem or Ethernet. The two main purposes of a CUI are reduction of the amount of data transmitted between user and host machines, and provision of a graphical user interface system to make the system easier to use.

  8. Broadening the interface bandwidth in simulation based training

    NASA Technical Reports Server (NTRS)

    Somers, Larry E.

    1989-01-01

    Currently most computer based simulations rely exclusively on computer generated graphics to create the simulation. When training is involved, the method almost exclusively used to display information to the learner is text displayed on the cathode ray tube. MICROEXPERT Systems is concentrating on broadening the communications bandwidth between the computer and user by employing a novel approach to video image storage combined with sound and voice output. An expert system is used to combine and control the presentation of analog video, sound, and voice output with computer based graphics and text. Researchers are currently involved in the development of several graphics based user interfaces for NASA, the U.S. Army, and the U.S. Navy. Here, the focus is on the human factors considerations, software modules, and hardware components being used to develop these interfaces.

  9. Internet Technology in Magnetic Resonance: A Common Gateway Interface Program for the World-Wide Web NMR Spectrometer

    NASA Astrophysics Data System (ADS)

    Buszko, Marian L.; Buszko, Dominik; Wang, Daniel C.

    1998-04-01

    A custom-written Common Gateway Interface (CGI) program for remote control of an NMR spectrometer using a World Wide Web browser has been described. The program, running on a UNIX workstation, uses multiple processes to handle concurrent tasks of interacting with the user and with the spectrometer. The program's parent process communicates with the browser and sends out commands to the spectrometer; the child process is mainly responsible for data acquisition. Communication between the processes is via the shared memory mechanism. The WWW pages that have been developed for the system make use of the frames feature of web browsers. The CGI program provides an intuitive user interface to the NMR spectrometer, making, in effect, a complex system an easy-to-use Web appliance.
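
    The division of labour described above, with a parent process serving the user while a child process acquires data and both sharing state, can be sketched generically with shared memory; the buffer layout and the simulated acquisition below are illustrative assumptions, not the actual CGI program.

        # Generic parent/child split with a shared-memory buffer, loosely mirroring the
        # UI-process / acquisition-process design described above.
        import time
        from multiprocessing import Process, Array, Value

        def acquire(buf, npts):
            """Child: pretend to acquire data points and publish them via shared memory."""
            for i in range(len(buf)):
                buf[i] = i * 0.1              # stand-in for a spectrometer sample
                with npts.get_lock():
                    npts.value = i + 1
                time.sleep(0.01)

        if __name__ == "__main__":
            buf, npts = Array("d", 100), Value("i", 0)
            child = Process(target=acquire, args=(buf, npts))
            child.start()
            while child.is_alive():           # parent: report status to the "browser"
                print("acquired points:", npts.value)
                time.sleep(0.2)
            child.join()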

  10. CAT 2 - An improved version of Cryogenic Analysis Tools for online and offline monitoring and analysis of large size cryostats

    NASA Astrophysics Data System (ADS)

    Pagliarone, C. E.; Uttaro, S.; Cappelli, L.; Fallone, M.; Kartal, S.

    2017-02-01

    CAT, Cryogenic Analysis Tools, is a software package developed using LabVIEW and ROOT environments to analyze the performance of large size cryostats, where many parameters, input, and control variables need to be acquired and studied at the same time. The present paper describes how CAT works and the main improvements achieved in the new version, CAT 2. New graphical user interfaces have been developed in order to make the use of the full package more user-friendly, and a process of resource optimization has been carried out. The offline analysis of the full cryostat performance is available both through the ROOT command-line interface and through the new graphical interfaces.

  11. NREL’s Controllable Grid Interface Saves Time and Resources, Improves Reliability of Renewable Energy Technologies; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The National Renewable Energy Laboratory's (NREL) controllable grid interface (CGI) test system at the National Wind Technology Center (NWTC) is one of two user facilities at NREL capable of testing and analyzing the integration of megawatt-scale renewable energy systems. The CGI specializes in testing of multimegawatt-scale wind and photovoltaic (PV) technologies as well as energy storage devices, transformers, control and protection equipment at medium-voltage levels, allowing the determination of the grid impacts of the tested technology.

  12. Excel2Genie: A Microsoft Excel application to improve the flexibility of the Genie-2000 Spectroscopic software.

    PubMed

    Forgács, Attila; Balkay, László; Trón, Lajos; Raics, Péter

    2014-12-01

    Excel2Genie, a simple and user-friendly Microsoft Excel interface, has been developed for the Genie-2000 Spectroscopic Software of Canberra Industries. This Excel application can directly control a Canberra Multichannel Analyzer (MCA), process the acquired data and visualize them. Combination of Genie-2000 with Excel2Genie results in remarkably increased flexibility and a possibility to carry out repetitive data acquisitions, even with changing parameters, and more sophisticated analysis. The developed software package comprises three worksheets that display parameters and results of data acquisition, data analysis and mathematical operations carried out on the measured gamma spectra, and at the same time allow control of these processes. Excel2Genie is freely available to assist gamma spectrum measurements and data evaluation by interested Canberra users. With access to the Visual Basic for Applications (VBA) source code of this application, users are able to modify the developed interface according to their intentions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. A Full Body Steerable Wind Display for a Locomotion Interface.

    PubMed

    Kulkarni, Sandip D; Fisher, Charles J; Lefler, Price; Desai, Aditya; Chakravarthy, Shanthanu; Pardyjak, Eric R; Minor, Mark A; Hollerbach, John M

    2015-10-01

    This paper presents the Treadport Active Wind Tunnel (TPAWT)-a full-body immersive virtual environment for the Treadport locomotion interface designed for generating wind on a user from any frontal direction at speeds up to 20 kph. The goal is to simulate the experience of realistic wind while walking in an outdoor virtual environment. A recirculating-type wind tunnel was created around the pre-existing Treadport installation by adding a large fan, ducting, and enclosure walls. Two sheets of air in a non-intrusive design flow along the side screens of the back-projection CAVE-like visual display, where they impinge and mix at the front screen to redirect towards the user in a full-body cross-section. By varying the flow conditions of the air sheets, the direction and speed of wind at the user are controlled. Design challenges to fit the wind tunnel in the pre-existing facility, and to manage turbulence to achieve stable and steerable flow, were overcome. The controller performance for wind speed and direction is demonstrated experimentally.

  14. Creating Interactive User Feedback in DGS Using Scripting Interfaces

    ERIC Educational Resources Information Center

    Fest, Andreas

    2010-01-01

    Feedback is an important component of interactive learning software. A conclusion from cognitive learning theory is that good software must give the learner more information about what he did. Following the ideas of constructivist learning theory the user should be in control of both the time and the level of feedback he receives. At the same time…

  15. The application of autostereoscopic display in smart home system based on mobile devices

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Ling, Zhi

    2015-03-01

    Smart home systems, which control home devices, are more and more popular in daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. At the same time, 3D stereo display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as its control interface is proposed to improve the user-friendliness of the experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesis method based on parallel processing with the Graphics Processing Unit (GPU) and implement it with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared to a traditional smart home system, the proposed system, by applying an autostereoscopic display to the control interface, enhances the realism, user-friendliness and visual comfort of the interface.

  16. Device- and system-independent personal touchless user interface for operating rooms: One personal UI to control all displays in an operating room.

    PubMed

    Ma, Meng; Fallavollita, Pascal; Habert, Séverine; Weidert, Simon; Navab, Nassir

    2016-06-01

    In the modern day operating room, the surgeon performs surgeries with the support of different medical systems that showcase patient information, physiological data, and medical images. It is generally accepted that numerous interactions must be performed by the surgical team to control the corresponding medical system to retrieve the desired information. Joysticks and physical keys are still present in the operating room due to the disadvantages of computer mice, and surgeons often communicate instructions to the surgical team when requiring information from a specific medical system. In this paper, a novel user interface is developed that allows the surgeon to personally perform touchless interaction with the various medical systems and to switch effortlessly among them, all without modifying the systems' software or hardware. To achieve this, a wearable RGB-D sensor is mounted on the surgeon's head for inside-out tracking of his/her finger relative to any of the medical systems' displays. Android devices with a special application are connected to the computers on which the medical systems are running, simulating a normal USB mouse and keyboard. When the surgeon interacts using pointing gestures, the desired cursor position on the targeted medical system display and the gestures are transformed into general events and sent to the corresponding Android device. Finally, the application running on the Android devices generates the corresponding mouse or keyboard events according to the targeted medical system. To simulate an operating room setting, our unique user interface was tested by seven medical participants who performed several interactions with the visualization of CT, MRI, and fluoroscopy images at varying distances from them. Results from the system usability scale and NASA-TLX workload index indicated a strong acceptance of our proposed user interface.
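
    The pipeline above converts pointing gestures into general cursor and keyboard events and forwards them to an Android relay attached to each medical workstation. The sketch below illustrates one possible event-forwarding step; the JSON message schema, host name and port are assumptions made for illustration and are not the authors' actual protocol.

        # Sketch of forwarding a generalized pointer event to a display-side relay.
        # The message schema, host and port are hypothetical; a relay must be listening.
        import json
        import socket

        def send_pointer_event(host, port, x_norm, y_norm, click=False):
            """Send one normalized cursor position (0..1) and an optional click."""
            event = {"type": "pointer", "x": x_norm, "y": y_norm, "click": click}
            with socket.create_connection((host, port), timeout=1.0) as sock:
                sock.sendall(json.dumps(event).encode("utf-8") + b"\n")

        # Example: the head-mounted tracker decides the display "ct-viewer" is targeted
        # and sends a cursor update (plus a click) to the relay next to that display.
        if __name__ == "__main__":
            send_pointer_event("ct-viewer.local", 5555, 0.42, 0.77, click=True)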

  17. Simulation Control Graphical User Interface Logging Report

    NASA Technical Reports Server (NTRS)

    Hewling, Karl B., Jr.

    2012-01-01

    One of the many tasks of my project was to revise the code of the Simulation Control Graphical User Interface (SIM GUI) to enable logging functionality to a file. I was also tasked with developing a script that directed the startup and initialization flow of the various LCS software components. This makes sure that a software component will not spin up until all the appropriate dependencies have been configured properly. Also I was able to assist hardware modelers in verifying the configuration of models after they have been upgraded to a new software version. I developed some code that analyzes the MDL files to determine if any errors were generated due to the upgrade process. Another one of the projects assigned to me was supporting the End-to-End Hardware/Software Daily Tag-up meeting.
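
    The startup script mentioned above ensures that a component does not spin up until its dependencies are configured. A minimal sketch of that kind of dependency-ordered startup follows; the component names and dependency graph are invented, and Python's graphlib is used here rather than whatever scripting language the LCS script actually employed.

        # Dependency-ordered startup sketch; component names and graph are hypothetical.
        from graphlib import TopologicalSorter

        # Each component lists the components that must be up before it starts.
        dependencies = {
            "sim_gui":     {"message_bus", "logger"},
            "hw_models":   {"message_bus"},
            "message_bus": {"logger"},
            "logger":      set(),
        }

        def start(component):
            print(f"starting {component}")  # placeholder for the real launch command

        # static_order() yields each component only after all of its dependencies.
        for component in TopologicalSorter(dependencies).static_order():
            start(component)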

  18. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.

    PubMed

    Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L

    2016-03-18

    Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802.
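
    The shared-control scheme blends BMI-derived motion with autonomous grasping commands as the hand approaches an object. A minimal sketch of one such blending rule follows; the linear distance-based weighting, radii and gain are illustrative assumptions, not the authors' published control law.

        # Sketch of blending a BMI velocity command with an autonomous correction.
        # The linear distance-based weight, radii and gain are illustrative assumptions.
        import numpy as np

        def blended_velocity(v_bmi, hand_pos, grasp_pos,
                             full_assist_radius=0.05, no_assist_radius=0.25, gain=1.0):
            """Blend user and autonomous commands based on distance to the grasp pose."""
            v_auto = gain * (np.asarray(grasp_pos) - np.asarray(hand_pos))
            d = np.linalg.norm(np.asarray(grasp_pos) - np.asarray(hand_pos))
            # alpha = 0 far from the object (pure BMI), 1 close to it (pure assistance)
            alpha = np.clip((no_assist_radius - d) /
                            (no_assist_radius - full_assist_radius), 0.0, 1.0)
            return (1.0 - alpha) * np.asarray(v_bmi) + alpha * v_auto

        print(blended_velocity([0.1, 0.0, 0.0], [0.0, 0.0, 0.0], [0.3, 0.0, 0.0]))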

  19. Design of Instrument Control Software for Solar Vector Magnetograph at Udaipur Solar Observatory

    NASA Astrophysics Data System (ADS)

    Gosain, Sanjay; Venkatakrishnan, P.; Venugopalan, K.

    2004-04-01

    A magnetograph is an instrument that measures the solar magnetic field by measuring Zeeman-induced polarization in solar spectral lines. A typical filter-based magnetograph has three main modules, namely a polarimeter, a narrow-band spectrometer (filter), and an imager (CCD camera). For successful operation of the magnetograph it is essential that these modules work in synchronization with each other. Here, we describe the design of the instrument control system implemented for the Solar Vector Magnetograph under development at Udaipur Solar Observatory. The control software is written in Visual Basic and exploits Component Object Model (COM) components for fast and flexible application development. The user can interact with the instrument modules through a Graphical User Interface (GUI) and can program the sequence of magnetograph operations. The integration of Interactive Data Language (IDL) ActiveX components in the interface provides a powerful tool for online visualization, analysis and processing of images.
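
    The control software synchronizes the polarimeter, the narrow-band filter and the CCD camera and lets the user program a sequence of magnetograph operations. A minimal sequencing sketch is given below; the module calls, polarization states and wavelength offsets are hypothetical stand-ins, not the actual Visual Basic/COM implementation.

        # Sketch of a programmed magnetograph sequence; module APIs and values are hypothetical.
        POL_STATES = ["I+Q", "I-Q", "I+U", "I-U", "I+V", "I-V"]
        WAVELENGTH_OFFSETS_MA = [-150, -75, 0, 75, 150]   # filter tuning steps (milli-angstrom)

        def set_polarimeter(state):   print(f"polarimeter -> {state}")
        def tune_filter(offset_ma):   print(f"filter offset -> {offset_ma} mA")
        def expose_camera(ms=100):    print(f"camera exposure {ms} ms")

        def run_sequence():
            for offset in WAVELENGTH_OFFSETS_MA:
                tune_filter(offset)
                for state in POL_STATES:
                    set_polarimeter(state)   # each module must settle before the next step
                    expose_camera()

        if __name__ == "__main__":
            run_sequence()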

  20. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general.

    PubMed

    Zander, Thorsten O; Kothe, Christian

    2011-04-01

    Cognitive monitoring is an approach utilizing realtime brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.

  1. Optimization of Residual Stresses in MMC's through Process Parameter Control and the use of Heterogeneous Compensating/Compliant Interfacial Layers. OPTCOMP2 User's Guide

    NASA Technical Reports Server (NTRS)

    Pindera, Marek-Jerzy; Salzar, Robert S.

    1996-01-01

    A user's guide for the computer program OPTCOMP2 is presented in this report. This program provides a capability to optimize the fabrication- or service-induced residual stresses in unidirectional metal matrix composites subjected to combined thermomechanical axisymmetric loading by altering the processing history, as well as through the microstructural design of interfacial fiber coatings. Through a user-friendly data input interface, the user specifies the initial architecture of the composite, the load history, the objective function, and the constraints; the constituent materials may be elastic, plastic, viscoplastic, or defined by a 'user-defined' constitutive model. The optimization procedure is based on an efficient solution methodology for the inelastic response of a fiber/interface layer(s)/matrix concentric cylinder model where the interface layers can be either homogeneous or heterogeneous. The response of heterogeneous layers is modeled using Aboudi's three-dimensional method of cells micromechanics model. The commercial optimization package DOT is used for the nonlinear optimization problem. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.
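
    OPTCOMP2 couples its micromechanics solver to the commercial optimizer DOT, which uses the method of feasible directions. As a rough analogue only, the sketch below poses a toy residual-stress objective with bounds and an inequality constraint using SciPy; the variables, objective and constraint are entirely fictitious and merely stand in for the kind of problem the program solves.

        # Toy analogue of a constrained residual-stress optimization; all quantities fictitious.
        import numpy as np
        from scipy.optimize import minimize

        def residual_stress(x):
            """Stand-in objective: x = (cooling rate, coating thickness)."""
            cooling_rate, coating_thickness = x
            return (cooling_rate - 0.3) ** 2 + 5.0 * (coating_thickness - 0.1) ** 2

        constraints = [
            # e.g. keep a fictitious matrix stress measure below an allowable value
            {"type": "ineq", "fun": lambda x: 1.0 - (x[0] + 2.0 * x[1])},
        ]
        bounds = [(0.0, 1.0), (0.0, 0.5)]

        result = minimize(residual_stress, x0=[0.5, 0.2], method="SLSQP",
                          bounds=bounds, constraints=constraints)
        print(result.x, result.fun)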

  2. Conversion of the CTA, Inc., en route operations concepts database into a formal sentence outline job task taxonomy.

    DOT National Transportation Integrated Search

    1993-01-01

    FAA Air Traffic Control Operations Concepts Volume VI: ARTCC-Host En Route Controllers (1990) developed by CTA, Inc., a technical description of the duties of an En Route air traffic control specialist (ATCS), formatted in User Interface Language, wa...

  3. Semi-autonomous unmanned ground vehicle control system

    NASA Astrophysics Data System (ADS)

    Anderson, Jonathan; Lee, Dah-Jye; Schoenberger, Robert; Wei, Zhaoyi; Archibald, James

    2006-05-01

    Unmanned Ground Vehicles (UGVs) have advantages over people in a number of different applications, including sentry duty, scouting hazardous areas, convoying goods and supplies over long distances, and exploring caves and tunnels. Despite recent advances in electronics, vision, artificial intelligence, and control technologies, fully autonomous UGVs are still far from being a reality. Currently, most UGVs are fielded using tele-operation with a human in the control loop. Using tele-operation, a user controls the UGV from the relative safety and comfort of a control station and sends commands to the UGV remotely. It is difficult for the user to issue higher-level commands such as "patrol this corridor" or "move to this position while avoiding obstacles." As computer vision algorithms are implemented in hardware, the UGV can easily become partially autonomous. As Field Programmable Gate Arrays (FPGAs) become larger and more powerful, vision algorithms can run at frame rate. With the rapid development of CMOS imagers for consumer electronics, frame rates can reach as high as 200 frames per second for a small region of interest. This increase in the speed of vision algorithm processing allows UGVs to become more autonomous, as they are able to recognize and avoid obstacles in their path, track targets, or move to a recognized area. The user is able to focus on giving broad supervisory commands and goals to the UGVs, allowing the user to control multiple UGVs at once while still maintaining the convenience of working from a central base station. In this paper, we describe a novel control system for the control of semi-autonomous UGVs. This control system combines a user interface similar to a simple tele-operation station with a control package, including the FPGA and multiple cameras. The control package interfaces with the UGV and provides the necessary control to guide the UGV.

  4. Interactive Design and the Mythical "Intuitive User Interface."

    ERIC Educational Resources Information Center

    Bielenberg, Daniel R.

    1993-01-01

    Discusses the design of graphical user interfaces. Highlights include conceptual models, including user needs, content, and what multimedia can do; and tools for building the users' mental models, including metaphor, natural mappings, prompts, feedback, and user testing. (LRW)

  5. User Interface Design for Dynamic Geometry Software

    ERIC Educational Resources Information Center

    Kortenkamp, Ulrich; Dohrmann, Christian

    2010-01-01

    In this article we describe long-standing user interface issues with Dynamic Geometry Software and common approaches to address them. We describe first prototypes of multi-touch-capable DGS. We also give some hints on the educational benefits of proper user interface design.

  6. The development of an intelligent user interface for NASA's scientific databases

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Roelofs, Larry H.

    1986-01-01

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI effort is to develop a friendly and intelligent user interface service that is based on expert systems and natural language processing technologies. This paper presents the design concepts, development approach and evaluation of performance of a prototype Intelligent User Interface Subsystem (IUIS) supporting an operational database.

  7. A parallel coordinates style interface for exploratory volume visualization.

    PubMed

    Tory, Melanie; Potts, Simeon; Möller, Torsten

    2005-01-01

    We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.

  8. Proceedings of the 3rd Annual Conference on Aerospace Computational Control, volume 1

    NASA Technical Reports Server (NTRS)

    Bernard, Douglas E. (Editor); Man, Guy K. (Editor)

    1989-01-01

    Conference topics included definition of tool requirements, advanced multibody component representation descriptions, model reduction, parallel computation, real time simulation, control design and analysis software, user interface issues, testing and verification, and applications to spacecraft, robotics, and aircraft.

  9. Earthdata User Interface Patterns: Building Usable Web Interfaces Through a Shared UI Pattern Library

    NASA Astrophysics Data System (ADS)

    Siarto, J.

    2014-12-01

    As more Earth science software tools and services move to the web, the design and usability of those tools become ever more important. A good user interface is now expected, and users are becoming increasingly intolerant of websites and web applications that work against them. The Earthdata UI Pattern Library attempts to give these scientists and developers the design tools they need to make usable, compelling user interfaces without the associated overhead of using a full design team. The patterns are tested, functional user interface elements targeted specifically at the Earth science community and will include web layouts, buttons, tables, typography, iconography, mapping and visualization/graphing widgets. These UI elements have emerged as the result of extensive user testing, research and software development within the NASA Earthdata team over the past year.

  10. MOO in Your Face: Researching, Designing, and Programming a User-Friendly Interface.

    ERIC Educational Resources Information Center

    Haas, Mark; Gardner, Clinton

    1999-01-01

    Suggests that the learning curve of a multi-user, object-oriented domain (MOO) impedes effective use. Discusses use of an IBM/PC-compatible interface that allows developers to modify the interface to provide a sense of presence for the user. Concludes that work in programming a variety of interfaces has led to a more intuitive environment for…

  11. Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game

    PubMed Central

    de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares

    2018-01-01

    This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549

  12. Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game.

    PubMed

    Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares

    2018-01-01

    This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.

  13. Tracking and data relay satellite system - NASA's new spacecraft data acquisition system

    NASA Technical Reports Server (NTRS)

    Schneider, W. C.; Garman, A. A.

    1979-01-01

    This paper describes NASA's new spacecraft acquisition system provided by the Tracking and Data Relay Satellite System (TDRSS). Four satellites in geostationary orbit and a ground terminal will provide complete tracking, telemetry, and command service for all of NASA's orbital satellites below a 12,000 km altitude. Western Union will lease the system, operate the ground terminal and provide operational satellite control. NASA's network control center will be the focal point for scheduling user services and controlling the interface between TDRSS and the NASA communications network, project control centers, and data processing. TDRSS single access user spacecraft data systems will be designed for time shared data relay support, and reimbursement policy and rate structure for non-NASA users are being developed.

  14. Dynamics modeling for parallel haptic interfaces with force sensing and control.

    PubMed

    Bernstein, Nicholas; Lawrence, Dale; Pao, Lucy

    2013-01-01

    Closed-loop force control can be used on haptic interfaces (HIs) to mitigate the effects of mechanism dynamics. A single multidimensional force-torque sensor is often employed to measure the interaction force between the haptic device and the user's hand. The parallel haptic interface at the University of Colorado (CU) instead employs smaller 1D force sensors oriented along each of the five actuating rods to build up a 5D force vector. This paper shows that a particular manipulandum/hand partition in the system dynamics is induced by the placement and type of force sensing, and discusses the implications on force and impedance control for parallel haptic interfaces. The details of a "squaring down" process are also discussed, showing how to obtain reduced degree-of-freedom models from the general six degree-of-freedom dynamics formulation.
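
    Because the CU device measures five scalar forces along its actuating rods rather than a single 6-D wrench, a resultant interaction force and torque can be assembled from the rod directions and attachment points. The sketch below shows that static composition; the rod geometry used here is invented for illustration and is not the CU hardware's.

        # Compose a resultant force/torque from 1-D rod force sensors (geometry invented).
        import numpy as np

        def rod_wrench(forces, directions, attach_points):
            """forces: (5,) scalars; directions: (5,3) rod axes; attach_points: (5,3)."""
            directions = np.asarray(directions, dtype=float)
            directions /= np.linalg.norm(directions, axis=1, keepdims=True)
            f_vectors = np.asarray(forces)[:, None] * directions   # force along each rod
            force = f_vectors.sum(axis=0)
            torque = np.cross(np.asarray(attach_points), f_vectors).sum(axis=0)
            return force, torque

        forces = [1.0, 0.5, -0.2, 0.3, 0.8]
        directions = np.random.default_rng(0).normal(size=(5, 3))
        attach_points = np.random.default_rng(1).normal(scale=0.1, size=(5, 3))
        print(rod_wrench(forces, directions, attach_points))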

  15. Teaching Tool for a Control Systems Laboratory Using a Quadrotor as a Plant in MATLAB

    ERIC Educational Resources Information Center

    Khan, Subhan; Jaffery, Mujtaba Hussain; Hanif, Athar; Asif, Muhammad Rizwan

    2017-01-01

    This paper presents a MATLAB-based application to teach the guidance, navigation, and control concepts of a quadrotor to undergraduate students, using a graphical user interface (GUI) and 3-D animations. The Simulink quadrotor model is controlled by a proportional integral derivative controller and a linear quadratic regulator controller. The GUI…
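
    The teaching tool pairs a Simulink quadrotor model with PID and LQR controllers behind a GUI. As a language-neutral illustration of the PID half only, a minimal discrete PID altitude loop is sketched below in Python; the toy vertical dynamics, gains and time step are arbitrary and are not taken from the paper.

        # Minimal discrete PID altitude loop on a toy double-integrator "quadrotor".
        # Gains, mass and time step are arbitrary illustrative values.
        dt, mass, g = 0.01, 0.5, 9.81
        kp, ki, kd = 8.0, 1.0, 4.0
        target = 1.0                      # desired altitude in metres

        z, vz, integral = 0.0, 0.0, 0.0
        prev_err = target - z             # avoids a derivative kick on the first step

        for step in range(1000):
            err = target - z
            integral += err * dt
            derivative = (err - prev_err) / dt
            prev_err = err
            thrust = mass * (g + kp * err + ki * integral + kd * derivative)
            az = thrust / mass - g        # toy vertical dynamics: m*z'' = thrust - m*g
            vz += az * dt
            z += vz * dt

        print(f"altitude after 10 s: {z:.3f} m")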

  16. Towards a holistic assessment of the user experience with hybrid BCIs.

    PubMed

    Lorenz, Romy; Pascual, Javier; Blankertz, Benjamin; Vidaurre, Carmen

    2014-06-01

    In recent years, brain-computer interfaces (BCIs) have become mature enough to immensely benefit from the expertise and tools established in the field of human-computer interaction (HCI). One of the core objectives in HCI research is the design of systems that provide a pleasurable user experience (UX). While the majority of BCI studies exclusively evaluate common efficiency measures such as classification accuracy and speed, single research groups have begun to look at further usability aspects such as ease of use, workload and learnability. However, these evaluation metrics only cover pragmatic aspects of UX while still not considering the hedonic quality of UX. In order to gain a holistic perspective on UX, hedonic quality aspects such as motivation and frustration were also taken into account for our evaluation of three BCI-driven interfaces, which were proposed to be used as a two-stage neuroprosthetic control within the EU project MUNDUS. At the first stage, one of six possible actions was selected and either confirmed or cancelled at the second stage. For the experiment, a solely event-related-potential-based interface (ERP-ERP) and two hybrid solutions were tested that were controlled by ERP and motor imagery (MI)--resulting in the two possible combinations: ERP selection/MI confirmation (ERP-MI) or MI selection/ERP confirmation (MI-ERP). Behavioural, subjective and encephalographic (EEG) data of 12 healthy subjects were collected during an online experiment with the three graphical user interfaces (GUIs). Results showed a significantly greater pragmatic quality (in terms of accuracy, efficiency, workload, use quality and learnability) for the ERP-ERP and ERP-MI GUIs in contrast to the MI-ERP GUI. Consequently, the MI-ERP GUI is least suited for use as a neuroprosthetic control. With respect to the comparison of the ERP-ERP and ERP-MI GUIs, no significant differences in pragmatic and hedonic quality of UX were found. Since throughout better results were obtained for the conventional approach and it was most preferred by the subjects, the ERP-ERP GUI seems more suitable for its deployment in actual end-users. Nevertheless, for individuals with stable MI patterns, the hybrid interface can be provided as an additional option of choice within the MUNDUS framework. Although the paramount goal in BCI research still remains the improvement of classification accuracy and communication speed, it is of significance to note that it is equally important for end-users to keep up their motivation and prevent frustration. By including pragmatic as well as hedonic quality aspects, this study is the first effort to gain a holistic perspective of the UX while interacting with BCI-driven assistive technology aimed at actual end-users. The broad-scale methodology provided valuable insights into the underlying dynamics causing the users' experience to differ across the GUIs. The results will be used to refine a BCI-driven neuroprosthesis and test it with end-users.

  17. Demonstration of the Low-Cost Virtual Collaborative Environment (VCE)

    NASA Technical Reports Server (NTRS)

    Bowers, David; Montes, Leticia; Ramos, Angel; Joyce, Brendan; Lumia, Ron

    1997-01-01

    This paper demonstrates the feasibility of a low-cost approach to remotely controlling equipment. Our demonstration system consists of a PC, the PUMA 560 robot with Barrett hand, and commercially available controller and teleconferencing software. The system provides a graphical user interface which allows a user to program equipment tasks and preview motions, i.e., simulate the results. Once satisfied that the actions are both safe and accomplish the task, the remote user sends the data over the Internet to the local site for execution on the real equipment. A video link provides visual feedback to the remote site. This technology lends itself readily to NASA's upcoming Mars expeditions by providing remote simulation and control of equipment.

  18. A microprocessor-based control system for the Vienna PDS microdensitometer

    NASA Technical Reports Server (NTRS)

    Jenkner, H.; Stoll, M.; Hron, J.

    1984-01-01

    The Motorola Exorset 30 system, based on a Motorola 6809 microprocessor, which serves as the control processor for the microdensitometer, is presented. User communication and instrument control are implemented in this system; data transmission to a host computer is provided via standard interfaces. The Vienna PDS system (VIPS) software was developed in BASIC and M6809 assembler. It provides efficient user interaction via function keys and argument input in a menu-oriented environment. All parameters can be stored on, and retrieved from, minifloppy disks, making it possible to set up large scanning tasks. Extensive user information includes continuously updated status and coordinate displays, as well as a real-time graphic display during scanning.

  19. A hybrid BCI for enhanced control of a telepresence robot.

    PubMed

    Carlson, Tom; Tonin, Luca; Perdikis, Serafeim; Leeb, Robert; del R Millán, José

    2013-01-01

    Motor-disabled end users have successfully driven a telepresence robot in a complex environment using a Brain-Computer Interface (BCI). However, to facilitate the interaction aspect that underpins the notion of telepresence, users must be able to voluntarily and reliably stop the robot at any moment, not just drive from point to point. In this work, we propose to exploit the user's residual muscular activity to provide a fast and reliable control channel, which can start/stop the telepresence robot at any moment. Our preliminary results show that not only does this hybrid approach increase the accuracy, but it also helps to reduce the workload and was the preferred control paradigm of all the participants.
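
    In the hybrid scheme above, the residual-muscle channel provides a fast and reliable stop that overrides the BCI driving commands. A minimal arbitration sketch follows; the threshold value and command names are illustrative assumptions.

        # Hybrid command arbitration sketch: a muscular stop overrides BCI drive commands.
        # Threshold and command names are illustrative assumptions.
        EMG_STOP_THRESHOLD = 0.6   # normalized residual-muscle activity

        def arbitrate(bci_command, emg_level):
            """bci_command: 'left' | 'right' | 'forward'; emg_level: 0..1."""
            if emg_level >= EMG_STOP_THRESHOLD:
                return "stop"          # the muscular channel always wins
            return bci_command

        print(arbitrate("forward", 0.2))   # -> forward
        print(arbitrate("forward", 0.9))   # -> stop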

  20. The ACE multi-user web-based Robotic Observatory Control System

    NASA Astrophysics Data System (ADS)

    Mack, P.

    2003-05-01

    We have developed an observatory control system that can be operated in interactive, remote or robotic modes. In interactive and remote mode the observer typically acquires the first object and then creates a script through a window interface to complete observations for the rest of the night. The system closes early in the event of bad weather. In robotic mode, observations are submitted ahead of time through a web-based interface. We present observations made with a 1.0-m telescope using these methods.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, B.P.

    This report presents a historical perspective of the difficulties associated with user interface design and a review of interface design techniques. Included in the report is an application of rapid interface prototyping to the development of CAMP's user interface. 24 refs., 2 tabs.

  2. A Question of Interface Design: How Do Online Service GUIs Measure Up?

    ERIC Educational Resources Information Center

    Head, Alison J.

    1997-01-01

    Describes recent improvements in graphical user interfaces (GUIs) offered by online services. Highlights include design considerations, including computer engineering capabilities and users' abilities; fundamental GUI design principles; user empowerment; visual communication and interaction; and an evaluation of online search interfaces. (LRW)

  3. The GUI OPAC: Approach with Caution.

    ERIC Educational Resources Information Center

    Hildreth, Charles R.

    1995-01-01

    Discusses the graphical user interface (GUI) online public access catalog (OPAC), a user interface that uses images to represent options. Topics include user interface design for information retrieval; designing effective bibliographic displays, including subject headings; two design principles; and what GUIs can bring to OPACs. (LRW)

  4. The Graphical User Interface: Crisis, Danger, and Opportunity.

    ERIC Educational Resources Information Center

    Boyd, L. H.; And Others

    1990-01-01

    This article describes differences between the graphical user interface and traditional character-based interface systems, identifies potential problems posed by graphic computing environments for blind computer users, and describes some programs and strategies that are being developed to provide access to those environments. (Author/JDD)

  5. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (DEC VAX ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  6. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  7. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION WITH MOTIF)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  8. Integrated Computer Controlled Glow Discharge Tube

    NASA Astrophysics Data System (ADS)

    Kaiser, Erik; Post-Zwicker, Andrew

    2002-11-01

    An "Interactive Plasma Display" was created for the Princeton Plasma Physics Laboratory to demonstrate the characteristics of plasma to various science education outreach programs. From high school students and teachers to undergraduate students and visitors to the lab, the plasma device will be a key component in advancing the public's basic knowledge of plasma physics. The device is fully computer controlled using LabVIEW, a touchscreen Graphical User Interface [GUI], and a GPIB interface. Utilizing a feedback loop, the display is fully autonomous in controlling pressure, as well as in monitoring the safety aspects of the apparatus. With a digital convectron gauge continuously monitoring pressure, the computer interface analyzes the input signals while making changes to a digital flow controller. This function works independently of the GUI, allowing the user to simply input and receive a desired pressure quickly, easily, and intuitively. The discharge tube is a 36" x 4" id glass cylinder with a 3" side port. A 3000 volt, 10 mA power supply is used to break down the plasma. A 300-turn solenoid was created to demonstrate the magnetic pinching of a plasma. All primary functions of the device are controlled through the GUI digital controllers. This configuration allows operators to safely control the pressure (100 mTorr-1 Torr), the magnetic field (0-90 Gauss, 7 amps, 10 volts), and finally, the voltage applied across the electrodes (0-3000 V, 10 mA).
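
    The display's feedback loop reads the convectron gauge and adjusts the digital flow controller to hold a requested pressure. A minimal sketch of such a loop is given below; the proportional-integral form, the gain values and the simplified chamber model are assumptions for illustration and do not represent the LabVIEW implementation.

        # Toy pressure-regulation loop: a PI controller driving a gas-flow setpoint.
        # Gains and the simple chamber model are illustrative assumptions.
        dt = 0.1
        kp, ki = 2.0, 0.5
        pump_rate = 0.8          # fraction of pressure removed per second by the pump

        pressure = 50.0          # mTorr, starting pressure
        setpoint = 300.0         # mTorr, requested through the GUI
        flow, integral = 0.0, 0.0

        for step in range(600):
            error = setpoint - pressure
            integral += error * dt
            flow = max(0.0, kp * error + ki * integral)   # gas flow command (a.u.)
            # toy chamber model: gas flow raises pressure, pumping lowers it
            pressure += (flow - pump_rate * pressure) * dt

        print(f"pressure after 60 s: {pressure:.1f} mTorr")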

  9. User-Centered Design, Experience, and Usability of an Electronic Consent User Interface to Facilitate Informed Decision-Making in an HIV Clinic.

    PubMed

    Ramos, S Raquel

    2017-11-01

    Health information exchange is the electronic accessibility and transferability of patient medical records across various healthcare settings and providers. In some states, patients have to formally give consent to allow their medical records to be electronically shared. The purpose of this study was to apply a novel user-centered, multistep, multiframework approach to design and test an electronic consent user interface, so patients with HIV can make more informed decisions about electronically sharing their health information. This study consisted of two steps. Step 1 was a cross-sectional, descriptive, qualitative study that used user-centric design interviews to create the user interface. This informed Step 2. Step 2 consisted of a one-group posttest to examine perceptions of usefulness, ease of use, preference, and comprehension of a health information exchange electronic consent user interface. More than half of the study population had college experience, but challenges remained with overall comprehension regarding consent. The user interface was not independently successful, suggesting that in addition to an electronic consent user interface, human interaction may also be necessary to address the complexities associated with consenting to electronically share health information. Comprehension is a key factor in the ability to make informed decisions.

  10. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and are translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.
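
    The decoded BCI intentions are relayed to the robot or virtual agent over UDP. The sketch below shows that kind of fire-and-forget command message; the command vocabulary, address and port are hypothetical and are not taken from the reviewed projects.

        # Minimal UDP sender for decoded BCI commands (address, port and vocabulary hypothetical).
        import json
        import socket

        ROBOT_ADDR = ("192.168.0.42", 9000)   # hypothetical robot / VR-agent endpoint
        COMMANDS = {"left", "right", "forward", "stop"}

        def send_command(command):
            assert command in COMMANDS, f"unknown command: {command}"
            message = json.dumps({"cmd": command}).encode("utf-8")
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.sendto(message, ROBOT_ADDR)

        if __name__ == "__main__":
            send_command("forward")   # e.g. emitted after the classifier detects an oddball target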

  11. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and are translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538

  12. A Co-Adaptive Brain-Computer Interface for End Users with Severe Motor Impairment

    PubMed Central

    Faller, Josef; Scherer, Reinhold; Costa, Ursula; Opisso, Eloy; Medina, Josep; Müller-Putz, Gernot R.

    2014-01-01

    Co-adaptive training paradigms for event-related desynchronization (ERD) based brain-computer interfaces (BCI) have proven effective for healthy users. As of yet, it is not clear whether co-adaptive training paradigms can also benefit users with severe motor impairment. The primary goal of our paper was to evaluate a novel cue-guided, co-adaptive BCI training paradigm with severely impaired volunteers. The co-adaptive BCI supports a non-control state, which is an important step toward intuitive, self-paced control. A secondary aim was to have the same participants operate a specifically designed self-paced BCI training paradigm based on the auto-calibrated classifier. The co-adaptive BCI analyzed the electroencephalogram from three bipolar derivations (C3, Cz, and C4) online, while the 22 end users alternately performed right hand movement imagery (MI), left hand MI and relax with eyes open (non-control state). After less than five minutes, the BCI auto-calibrated and proceeded to provide visual feedback for the MI task that could be classified better against the non-control state. The BCI continued to regularly recalibrate. In every calibration step, the system performed trial-based outlier rejection and trained a linear discriminant analysis classifier based on one auto-selected logarithmic band-power feature. In 24 minutes of training, the co-adaptive BCI worked significantly (p = 0.01) better than chance for 18 of 22 end users. The self-paced BCI training paradigm worked significantly (p = 0.01) better than chance in 11 of 20 end users. The presented co-adaptive BCI complements existing approaches in that it supports a non-control state, requires very little setup time, requires no BCI expert and works online based on only two electrodes. The preliminary results from the self-paced BCI paradigm compare favorably to previous studies, and the collected data will allow further improvement of self-paced BCI systems for disabled users. PMID:25014055
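    The calibration step described above (one auto-selected logarithmic band-power feature feeding a linear discriminant analysis classifier) can be illustrated with a short sketch. The candidate frequency bands, sampling rate, and the simple accuracy-based band selection below are simplifying assumptions rather than the authors' exact pipeline, and trial-based outlier rejection is omitted.

```python
# Sketch of one co-adaptive calibration step: log band-power feature + LDA.
# Band limits, sampling rate, and the brute-force band selection are
# simplified assumptions, not the authors' exact implementation.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # Hz, assumed sampling rate

def log_bandpower(trials: np.ndarray, low: float, high: float) -> np.ndarray:
    """trials: (n_trials, n_samples) bipolar EEG; returns log variance in the band."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="bandpass")
    filtered = filtfilt(b, a, trials, axis=1)
    return np.log(np.var(filtered, axis=1, keepdims=True))

def calibrate(trials: np.ndarray, labels: np.ndarray):
    """Pick the frequency band whose log band-power best separates MI vs. rest."""
    bands = [(8, 12), (12, 16), (16, 24), (24, 30)]  # candidate bands (assumed)
    best = None
    for low, high in bands:
        X = log_bandpower(trials, low, high)
        lda = LinearDiscriminantAnalysis().fit(X, labels)
        acc = lda.score(X, labels)
        if best is None or acc > best[0]:
            best = (acc, (low, high), lda)
    return best  # (training accuracy, selected band, fitted classifier)
```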

  13. Beyond intuitive anthropomorphic control: recent achievements using brain computer interface technologies

    NASA Astrophysics Data System (ADS)

    Pohlmeyer, Eric A.; Fifer, Matthew; Rich, Matthew; Pino, Johnathan; Wester, Brock; Johannes, Matthew; Dohopolski, Chris; Helder, John; D'Angelo, Denise; Beaty, James; Bensmaia, Sliman; McLoughlin, Michael; Tenore, Francesco

    2017-05-01

    Brain-computer interface (BCI) research has progressed rapidly, with BCIs shifting from animal tests to human demonstrations of controlling computer cursors and even advanced prosthetic limbs, the latter having been the goal of the Revolutionizing Prosthetics (RP) program. These achievements now include direct electrical intracortical microstimulation (ICMS) of the brain to provide human BCI users feedback information from the sensors of prosthetic limbs. These successes raise the question of how well people would be able to use BCIs to interact with systems that are not based directly on the body (e.g., prosthetic arms), and how well BCI users could interpret ICMS information from such devices. If paralyzed individuals could use BCIs to effectively interact with such non-anthropomorphic systems, it would offer them numerous new opportunities to control novel assistive devices. Here we explore how well a participant with tetraplegia can detect infrared (IR) sources in the environment using a prosthetic-arm-mounted camera that encodes IR information via ICMS. We also investigate how well a BCI user could transition from controlling a BCI based on prosthetic arm movements to controlling a flight simulator, a system with different physical dynamics than the arm. In that test, the BCI participant used environmental information encoded via ICMS to identify which of several upcoming flight routes was the best option. For both tasks, the BCI user was able to quickly learn how to interpret the ICMS-provided information to achieve the task goals.

  14. Open architecture CMM motion controller

    NASA Astrophysics Data System (ADS)

    Chang, David; Spence, Allan D.; Bigg, Steve; Heslip, Joe; Peterson, John

    2001-12-01

    Although initially the only Coordinate Measuring Machine (CMM) sensor available was a touch trigger probe, technological advances in sensors and computing have greatly increased the variety of available inspection sensors. Non-contact laser digitizers and analog scanning touch probes require very well tuned CMM motion control, as well as an extensible, open architecture interface. This paper describes the implementation of a retrofit CMM motion controller designed for open architecture interface to a variety of sensors. The controller is based on an Intel Pentium microcomputer and a Servo To Go motion interface electronics card. Motor amplifiers, safety, and additional interface electronics are housed in a separate enclosure. Host Signal Processing (HSP) is used for the motion control algorithm. Compared to the usual host plus DSP architecture, single CPU HSP simplifies integration with the various sensors, and implementation of software geometric error compensation. Motion control tuning is accomplished using a remote computer via 100BaseTX Ethernet. A Graphical User Interface (GUI) is used to enter geometric error compensation data, and to optimize the motion control tuning parameters. It is shown that this architecture achieves the required real time motion control response, yet is much easier to extend to additional sensors.

  15. Classifying BCI signals from novice users with extreme learning machine

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bermúdez, Germán; Bueno-Crespo, Andrés; José Martinez-Albaladejo, F.

    2017-07-01

    A brain-computer interface (BCI) allows external devices to be controlled using only the electrical activity of the brain. In order to improve such systems, several approaches have been proposed. However, algorithms are usually tested with standard BCI signals from expert users or from repositories available on the Internet. In this work, extreme learning machine (ELM) has been tested with signals from 5 novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method to classify electroencephalogram signals from novice users.
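    As a rough illustration of the classifier mentioned above, the following is a minimal extreme learning machine: random, fixed hidden-layer weights and a least-squares solution for the output weights. The feature dimensionality, hidden-layer size, and {-1, +1} label coding are assumptions made for the sketch.

```python
# Minimal extreme learning machine (ELM) sketch for two-class EEG features.
# Hidden-layer size and feature matrix shape are illustrative assumptions.
import numpy as np

class ELM:
    def __init__(self, n_hidden: int = 100, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "ELM":
        # Random input weights and biases are fixed; only output weights are learned.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y     # least-squares output weights
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        H = np.tanh(X @ self.W + self.b)
        return np.sign(H @ self.beta)         # labels in {-1, +1}
```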

  16. Knob manager (KM) operators guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1993-10-08

    KM, Knob Manager, is a tool which enables the user to use the SUNDIALS knob box to adjust the settings of the control system. The following are some features of KM: dynamic knob assignment with a user-friendly interface; user-defined gain for each individual knob; graphical displays of the operating range and status of each assigned process variable; backup and restore of one or multiple process variables; and saving the current settings to a file so they can be recalled from that file in the future.
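    The save/recall behaviour described above can be sketched very simply: each assigned process variable keeps a current value and a user-defined gain, and the whole assignment can be written to and restored from a file. The file format and the process-variable names below are hypothetical.

```python
# Sketch of the save/recall feature described for KM: settings of assigned
# process variables written to a file and restored later. File format and
# variable names are illustrative assumptions.
import json

def save_settings(assignments: dict, path: str) -> None:
    """assignments maps process-variable name -> {'value': float, 'gain': float}."""
    with open(path, "w") as f:
        json.dump(assignments, f, indent=2)

def restore_settings(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

# Example: two knobs with user-defined gains (hypothetical variable names).
knobs = {"QUAD:PS1:CURRENT": {"value": 12.5, "gain": 0.1},
         "BEND:PS2:CURRENT": {"value": 103.2, "gain": 1.0}}
save_settings(knobs, "km_settings.json")
print(restore_settings("km_settings.json"))
```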

  17. Brain-computer interface technology: a review of the first international meeting.

    PubMed

    Wolpaw, J R; Birbaumer, N; Heetderks, W J; McFarland, D J; Peckham, P H; Schalk, G; Donchin, E; Quatrano, L A; Robinson, C J; Vaughan, T M

    2000-06-01

    Over the past decade, many laboratories have begun to explore brain-computer interface (BCI) technology as a radically new communication option for those with neuromuscular impairments that prevent them from using conventional augmentative communication methods. BCIs provide these users with communication channels that do not depend on peripheral nerves and muscles. This article summarizes the first international meeting devoted to BCI research and development. Current BCIs use electroencephalographic (EEG) activity recorded at the scalp or single-unit activity recorded from within cortex to control cursor movement, select letters or icons, or operate a neuroprosthesis. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI which recognizes the commands contained in the input and expresses them in device control. Current BCIs have maximum information transfer rates of 5-25 b/min. Achievement of greater speed and accuracy depends on improvements in signal processing, translation algorithms, and user training. These improvements depend on increased interdisciplinary cooperation between neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective methods for evaluating alternative methods. The practical use of BCI technology depends on the development of appropriate applications, identification of appropriate user groups, and careful attention to the needs and desires of individual users. BCI research and development will also benefit from greater emphasis on peer-reviewed publications, and from adoption of standard venues for presentations and discussion.
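    The information transfer rates quoted above (5-25 b/min) are commonly computed with the Wolpaw bit-rate formula, which the abstract does not spell out. The sketch below applies that formula; the example numbers of targets, accuracy, and selection rate are illustrative only.

```python
# Information transfer rate per selection, using the widely cited Wolpaw
# formula (the abstract only quotes the resulting 5-25 bits/min range).
import math

def bits_per_selection(n_targets: int, accuracy: float) -> float:
    """B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)); assumes 0 < P <= 1."""
    if accuracy >= 1.0:
        return math.log2(n_targets)
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

# Example: 4 targets at 80% accuracy, 10 selections per minute.
print(bits_per_selection(4, 0.8) * 10)  # about 9.6 bits/min
```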

  18. Implementing virtual reality interfaces for the geosciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.; Jacobsen, J.; Austin, A.

    1996-06-01

    For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum, ranging the gamut from command-line interfaces to completely immersive environments. In the former, one uses the keyboard to enter three- or six-dimensional parameters. In the latter, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g. attached to a head-mounted display. Rich, extensible and often complex languages are a vehicle whereby the user controls parameters to manipulate object position and location in a virtual world, but the keyboard is the obstacle in that typing is cumbersome, error-prone and typically slow. In the latter, the user can interact with these parameters by means of motor skills which are highly developed. Two specific geoscience application areas will be highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial location of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon, and provides parameters used by the streamlines algorithm.

  19. LinkWinds: An Approach to Visual Data Analysis

    NASA Technical Reports Server (NTRS)

    Jacobson, Allan S.

    1992-01-01

    The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration and analysis system resulting from a NASA/JPL program of research into graphical methods for rapidly accessing, displaying and analyzing large multivariate multidisciplinary datasets. It is an integrated multi-application execution environment allowing the dynamic interconnection of multiple windows containing visual displays and/or controls through a data-linking paradigm. This paradigm, which results in a system much like a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but provides a highly intuitive, easy to learn user interface on top of the traditional graphical user interface.

  20. The Body-Machine Interface: A new perspective on an old theme

    PubMed Central

    Casadio, Maura; Ranganathan, Rajiv; Mussa-Ivaldi, Ferdinando A.

    2012-01-01

    Body-machine interfaces establish a way to interact with a variety of devices, allowing their users to extend the limits of their performance. Recent advances in this field, ranging from computer-interfaces to bionic limbs, have had important consequences for people with movement disorders. In this article, we provide an overview of the basic concepts underlying the body-machine interface with special emphasis on their use for rehabilitation and for operating assistive devices. We outline the steps involved in building such an interface and we highlight the critical role of body-machine interfaces in addressing theoretical issues in motor control as well as their utility in movement rehabilitation. PMID:23237465

  1. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.

  2. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they get transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  3. A Prototype Graphical User Interface for Co-op: A Group Decision Support System.

    DTIC Science & Technology

    1992-03-01

    achieve their potential to communicate. Information-oriented, systematic graphic design is the use of typography, symbols, color, and other static and... application by reducing user effort and enhancing interaction. This thesis designs and develops a prototype Graphical User Interface (GUI) for Co-op...

  4. Pilot-Vehicle Interface

    DTIC Science & Technology

    1993-11-01

    way is to develop a crude but working model of an entire system. The other is by developing a realistic model of the user interface, leaving out most... devices or by incorporating software for a more user-friendly interface. Automation introduces the possibility of making data entry errors. Multimode... across various human-computer interfaces. Memory: Minimize the amount of information that the user must maintain in short-term memory

  5. BrainIACS: a system for web-based medical image processing

    NASA Astrophysics Data System (ADS)

    Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.

    2009-02-01

    We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is completely implemented using freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements such as check boxes, drop down lists, input boxes, radio buttons, tab panels and group boxes are supported. Some elements such as the input box support input validation. Changes to the user interface such as addition and deletion of elements are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed using Java (and finally JavaScript after compilation of the front-end code), it is platform independent with the only requirements being that a Servlet implementation be available and that the processing algorithms can execute on the server platform.
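    To make the XML-described interface idea concrete, the sketch below parses a small interface description into widget specifications. The element and attribute names are invented for illustration and are not the BrainIACS schema.

```python
# Sketch of an XML-described user interface: parse a description into widget
# specifications. Element and attribute names are illustrative assumptions.
import xml.etree.ElementTree as ET

UI_XML = """
<interface algorithm="SkullStripping">
  <checkbox name="save_intermediate" label="Save intermediate volumes"/>
  <dropdown name="atlas" label="Atlas">
    <option>adult</option>
    <option>pediatric</option>
  </dropdown>
  <inputbox name="threshold" label="Threshold" type="float" min="0" max="1"/>
</interface>
"""

def parse_interface(xml_text: str) -> list[dict]:
    """Return one widget specification per XML element."""
    widgets = []
    for elem in ET.fromstring(xml_text):
        spec = {"kind": elem.tag, **elem.attrib}
        if elem.tag == "dropdown":
            spec["options"] = [opt.text for opt in elem.findall("option")]
        widgets.append(spec)
    return widgets

for w in parse_interface(UI_XML):
    print(w)
```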

  6. Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Mousas, Christos; Anagnostopoulos, Christos-Nikolaos

    2017-06-01

    This paper presents a hybrid character control interface that provides the ability to synthesize, in real time, a variety of actions based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control that enables the direct mapping of the user's pose to the character, the motion controller that synthesizes the desired motion of the character based on an activity recognition methodology, and the hybrid control that lies between the performance animation and the motion controller. With the methodology presented, the user will have the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to synthesize a variety of actions that cannot be performed directly by him/her, but which the system synthesizes. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three different full-body character control methodologies.

  7. Electroencephalography(EEG)-based instinctive brain-control of a quadruped locomotion robot.

    PubMed

    Jia, Wenchuan; Huang, Dandan; Luo, Xin; Pu, Huayan; Chen, Xuedong; Bai, Ou

    2012-01-01

    Artificial intelligence and bionic control have been applied in electroencephalography (EEG)-based robot systems to execute complex brain-control tasks. Nevertheless, due to technical limitations of EEG decoding, the brain-computer interface (BCI) protocol is often complex, and the mapping between the EEG signals and the practical instructions often lacks a logical association, which restricts actual use. This paper presents a strategy that can be used to control a quadruped locomotion robot through the user's instinctive actions, based on five kinds of movement-related neurophysiological signals. In actual use, the user drives or imagines limb/wrist actions to generate EEG signals that adjust the real movement of the robot according to his/her own motor reflex to the robot's locomotion. This method is easy to use in practice, as the user generates the brain-control signal through an instinctive reaction. By adopting behavioral control based on learning and evolution with the proposed strategy, complex movement tasks may be realized through instinctive brain-control.

  8. Engineering platform and experimental protocol for design and evaluation of a neurally-controlled powered transfemoral prosthesis.

    PubMed

    Zhang, Fan; Liu, Ming; Harper, Stephen; Lee, Michael; Huang, He

    2014-07-22

    To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion developed in our previous study has demonstrated a great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of a powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First, a platform based on a PC and a visual programming environment was developed to implement the prosthesis control algorithms, including the NMI training algorithm, the NMI online testing algorithm, and the intrinsic control algorithm. To demonstrate the function of this platform, in this study the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller when performing activities, such as standing, level-ground walking, ramp ascent, and ramp descent continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform and experimental setup and protocol could aid the future development and application of neurally-controlled powered artificial legs.

  9. A self-paced brain-computer interface for controlling a robot simulator: an online event labelling paradigm and an extended Kalman filter based algorithm for online training.

    PubMed

    Tsui, Chun Sing Louis; Gan, John Q; Roberts, Stephen J

    2009-03-01

    Due to the non-stationarity of EEG signals, online training and adaptation are essential to EEG-based brain-computer interface (BCI) systems. Self-paced BCIs offer more natural human-machine interaction than synchronous BCIs, but it is a great challenge to train and adapt a self-paced BCI online because the user's control intention and timing are usually unknown. This paper proposes a novel motor imagery based self-paced BCI paradigm for controlling a simulated robot in a specifically designed environment which is able to provide the user's control intention and timing during online experiments, so that online training and adaptation of the motor imagery based self-paced BCI can be effectively investigated. We demonstrate the usefulness of the proposed paradigm with an extended Kalman filter based method to adapt the BCI classifier parameters, with experimental results of online self-paced BCI training with four subjects.
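    The online adaptation idea can be illustrated with a simplified, linear Kalman-filter update of classifier weights, where labelled events from the paradigm drive the correction step. The paper uses an extended Kalman filter; the linear observation model and the noise covariances below are simplifying assumptions.

```python
# Simplified sketch of Kalman-filter-style online adaptation of linear
# classifier weights w (the paper uses an extended Kalman filter; this
# linear version and the noise covariances are simplifying assumptions).
import numpy as np

class OnlineLinearClassifier:
    def __init__(self, n_features: int, q: float = 1e-4, r: float = 1.0):
        self.w = np.zeros(n_features)          # state: classifier weights
        self.P = np.eye(n_features)            # state covariance
        self.Q = q * np.eye(n_features)        # process (drift) noise
        self.R = r                             # observation noise

    def update(self, x: np.ndarray, y: float) -> None:
        """x: feature vector; y: labelled target output (+1 / -1)."""
        self.P = self.P + self.Q               # predict (weights assumed to drift)
        S = x @ self.P @ x + self.R            # innovation variance (scalar)
        K = self.P @ x / S                     # Kalman gain
        self.w = self.w + K * (y - x @ self.w) # correct with prediction error
        self.P = self.P - np.outer(K, x) @ self.P

    def predict(self, x: np.ndarray) -> float:
        return float(np.sign(x @ self.w))
```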

  10. A Matlab-Based Graphical User Interface for Simulation and Control Design of a Hydrogen Mixer

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Figueroa, Fernando

    2003-01-01

    A Graphical User Interface (GUI) that facilitates prediction and control design tasks for a propellant mixer is described. The Hydrogen mixer is used in rocket test stand operations at the NASA John C. Stennis Space Center. The mixer injects gaseous hydrogen (GH2) into a stream of liquid hydrogen (LH2) to obtain a combined flow with desired thermodynamic properties. The flows of GH2 and LH2 into the mixer are regulated by two control valves, and a third control valve is installed at the exit of the mixer to regulate the combined flow. The three valves may be simultaneously operated in order to achieve any desired combination of total flow, exit temperature and mixer pressure within the range of operation. The mixer, thus, constitutes a three-input, three-output system. A mathematical model of the mixer has been obtained and validated with experimental data. The GUI presented here uses the model to predict mixer response under diverse conditions.

  11. Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting

    NASA Astrophysics Data System (ADS)

    Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.

    2017-12-01

    Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation and efficiency enhancement in the agricultural process has not yet been extensive. This research proposed a low-cost agricultural robot architecture. The robot could help farmers survey their farm area, treat the tomato plants and harvest the ripe tomatoes. Communication between farmer and robot was facilitated by a wireless link using radio waves to reach a wide area (120 m radius). The radio link was combined with Bluetooth to simplify the communication between the robot and the farmer's Android smartphone. The robot was equipped with a camera, so the farmers could survey the farm situation on a 7-inch monitor display in real time. The farmers controlled the robot and arm movement through a user interface on an Android smartphone. The user interface contains control icons that allow farmers to control the robot movement (forward, reverse, turn right and turn left) and cut the spotty leaves or harvest the ripe tomatoes.

  12. Applying Cognitive Psychology to User Interfaces

    NASA Astrophysics Data System (ADS)

    Durrani, Sabeen; Durrani, Qaiser S.

    This paper explores some key aspects of cognitive psychology that may be mapped onto user interfaces. The major focus in existing user interface guidelines is on consistency, simplicity, feedback, system messages, display issues, navigation, colors, graphics, visibility and error prevention [8-10]. These guidelines are effective in designing user interfaces. However, these guidelines do not handle the issues that may arise due to the innate structure of the human brain and human limitations, for example, where to place graphics on the screen so that the user can easily process them, and what kind of background should be given on the screen according to the limitations of the human motor system. In this paper we have collected some available guidelines from the area of cognitive psychology [1, 5, 7]. In addition, we have extracted a few guidelines from theories and studies of cognitive psychology [3, 11] which may be mapped to user interfaces.

  13. A tailored 200 parameter VME based data acquisition system for IBA at the Lund Ion Beam Analysis Facility - Hardware and software

    NASA Astrophysics Data System (ADS)

    Elfman, Mikael; Ros, Linus; Kristiansson, Per; Nilsson, E. J. Charlotta; Pallon, Jan

    2016-03-01

    With the recent advances towards modern Ion Beam Analysis (IBA), going from one- or few-parameter detector systems to multi-parameter systems, it has been necessary to expand and replace the more than twenty-year-old CAMAC-based system. A new VME multi-parameter (presently up to 200 channels) data acquisition and control system has been developed and implemented at the Lund Ion Beam Analysis Facility (LIBAF). The system is based on the VX-511 Single Board Computer (SBC), acting as master with arbiter functionality, and consists of standard VME modules like Analog to Digital Converters (ADCs), Charge to Digital Converters (QDCs), Time to Digital Converters (TDCs), scalers, IO-cards, high voltage and waveform units. The modules have been specially selected to support all of the present detector systems in the laboratory, with the option of future expansion. Typically, the detector systems consist of silicon strip detectors, silicon drift detectors and scintillator detectors, for detection of charged particles, X-rays and γ-rays. The flow of raw data buffers from the VME bus to their final storage location on a 16-terabyte network-attached storage disc (NAS disc) is described. The acquisition process, remotely controlled over one of the SBC's Ethernet channels, is also discussed. The user interface is written in the Kmax software package, and is used to control the acquisition process as well as for advanced online and offline data analysis through a user-friendly graphical user interface (GUI). In this work the system implementation, layout and performance are presented. The user interface and possibilities for advanced offline analysis are also discussed and illustrated.

  14. A case study on better iconographic design in electronic medical records' user interface.

    PubMed

    Tasa, Umut Burcu; Ozcan, Oguzhan; Yantac, Asim Evren; Unluer, Ayca

    2008-06-01

    It is a known fact that there is a conflict between what users expect and what user interface designers create in the field of medical informatics, along with other fields of interface design. The objective of the study is to suggest, from the 'design art' perspective, a method for improving the usability of an electronic medical record (EMR) interface. The suggestion is based on the hypothesis that the user interface of an EMR should be iconographic. The proposed three-step method consists of a questionnaire survey on how hospital users perceive concepts/terms that are going to be used in the EMR user interface. Then icons associated with the terms are designed by a designer, following a guideline which is prepared according to the results of the first questionnaire. Finally, the icons are presented back to the target group for verification. A case study was conducted with 64 medical staff and 30 professional designers for the first questionnaire, and with 30 medical staff for the second. In the second questionnaire, an average of 7.53 icons out of 10 was matched correctly, with a standard deviation of 0.98. Also, all icons except three were matched correctly in at least 83.3% of the forms. The proposed new method differs from the majority of previous studies, which are based on user requirements, by relying on user experiments instead. The study demonstrated that the user interface of EMRs should be designed according to a guideline that results from a survey on users' experiences of metaphoric perception of the terms.

  15. Real-Life Migrants on the MUVE: Stories of Virtual Transitions

    ERIC Educational Resources Information Center

    Perkins, Ross A.; Arreguin, Cathy

    2007-01-01

    The communication and collaborative interface known as a multi-user virtual environment (MUVE), has existed since as early as the late 1970s. MUVEs refer to programs that have an animated character ("avatar") controlled by a user within a wider environment that can be explored--or built--at will. Second Life, a MUVE created by San Francisco-based…

  16. An autonomous fault detection, isolation, and recovery system for a 20-kHz electric power distribution test bed

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Walters, Jerry L.

    1991-01-01

    Future space explorations will require long-term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitoring and control of the space environment subsystems by expert system software, which emulates human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.

  17. The Distributed Common Ground System-Army User Interface

    DTIC Science & Technology

    2015-06-12

    ...its perceived lack of effectiveness. Popular opinion of the DCGS-A user interface within the military is that it is unfriendly to use and not intuitive...

  18. Learning Analytics for Natural User Interfaces

    ERIC Educational Resources Information Center

    Martinez-Maldonado, Roberto; Shum, Simon Buckingham; Schneider, Bertrand; Charleer, Sven; Klerkx, Joris; Duval, Erik

    2017-01-01

    The continuous advancement of natural user interfaces (NUIs) allows for the development of novel and creative ways to support collocated collaborative work in a wide range of areas, including teaching and learning. The use of NUIs, such as those based on interactive multi-touch surfaces and tangible user interfaces (TUIs), can offer unique…

  19. Monitoring and Control Interface Based on Virtual Sensors

    PubMed Central

    Escobar, Ricardo F.; Adam-Medina, Manuel; García-Beltrán, Carlos D.; Olivares-Peregrino, Víctor H.; Juárez-Romero, David; Guerrero-Ramírez, Gerardo V.

    2014-01-01

    In this article, a toolbox based on a monitoring and control interface (MCI) is presented and applied to a heat exchanger. The MCI was programmed to perform sensor fault detection and isolation and fault tolerance using virtual sensors. The virtual sensors were designed from model-based high-gain observers. To develop the control task, different kinds of control laws were included in the monitoring and control interface. These control laws are PID, MPC and a non-linear model-based control law. The MCI helps keep the heat exchanger in operation even if an outlet temperature sensor fault occurs; in the case of outlet temperature sensor failure, the MCI will display an alarm. The monitoring and control interface is used as a practical tool to support electronic engineering students with heat transfer and control concepts to be applied in a double-pipe heat exchanger pilot plant. The method aims to teach the students through the observation and manipulation of the main variables of the process and by the interaction with the monitoring and control interface (MCI) developed in LabVIEW©. The MCI provides the electronic engineering students with the knowledge of heat exchanger behavior, since the interface is provided with a thermodynamic model that approximates the temperatures and the physical properties of the fluid (density and heat capacity). An advantage of the interface is the easy manipulation of the actuator for automatic or manual operation. Another advantage of the monitoring and control interface is that all algorithms can be manipulated and modified by the users. PMID:25365462
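    Of the control laws included in the MCI, the PID law is the simplest to sketch. The discrete-time form below, with hypothetical gains, sample time, and setpoint, illustrates the kind of controller the interface exposes to students.

```python
# Minimal discrete PID control law of the kind included in the MCI toolbox;
# gains, sample time, and setpoint values are illustrative assumptions.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate the outlet temperature of a heat exchanger (values hypothetical).
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
valve_command = controller.step(setpoint=60.0, measurement=55.0)
print(valve_command)
```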

  20. Development of a Mobile User Interface for Image-based Dietary Assessment.

    PubMed

    Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2010-12-31

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.

  1. Living with an autonomous spatiotemporal home heating system: Exploration of the user experiences (UX) through a longitudinal technology intervention-based mixed-methods approach.

    PubMed

    Kruusimagi, Martin; Sharples, Sarah; Robinson, Darren

    2017-11-01

    Rising energy demands place pressure on domestic energy consumption, but savings can be delivered through home automation and engaging users with their heating and energy behaviours. The aim of this paper is to explore user experiences (UX) of living with an automated heating system regarding experiences of control, understanding of the system, emerging thermal behaviours, and interactions with the system, as this area has not been sufficiently researched through extended deployments in existing homes. We present a longitudinal deployment of a quasi-autonomous spatiotemporal home heating system in three homes. Users were provided with a smartphone control application linked to a self-learning heating algorithm. Rich qualitative and quantitative data presented here enabled a holistic exploration of UX. The paper's contribution focuses on highlighting key aspects of the UX of living with an automated heating system, including (i) adoption of the control interface into the social context, (ii) how users' vigilance in maintaining preferred conditions prevailed as a better indicator of system over-ride than gross deviation from thermal comfort, (iii) limited but motivated proactivity in system-initiated communications as the best strategy for soliciting user feedback when inference fails, and (iv) two main motivations for interacting with the interface - managing irregularities when absent from the house and maintaining immediate comfort, the latter comprising a checking behaviour that can transition to a system state alteration behaviour depending on mismatches. We conclude by highlighting the complex socio-technical context in which thermal decisions are made in a situated-action manner, and by calling for a more holistic, UX-focused approach in the design of automated home systems. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. I-SAVE: AN INTERACTIVE REAL-TIME MONITOR AND CONTROLLER TO INFLUENCE ENERGY CONSERVATION BEHAVIOR BY IMPULSE SAVING

    EPA Science Inventory

    Simulation-based model to explore the benefits of monitoring and control to energy saving opportunities in residential homes; an adaptive algorithm to predict the type of electrical loads; a prototype user friendly interface monitoring and control device to save energy; a p...

  3. Internet Technology in Magnetic Resonance: A Common Gateway Interface Program for the World-Wide Web NMR Spectrometer

    PubMed

    Buszko; Buszko; Wang

    1998-04-01

    A custom-written Common Gateway Interface (CGI) program for remote control of an NMR spectrometer using a World Wide Web browser has been described. The program, running on a UNIX workstation, uses multiple processes to handle concurrent tasks of interacting with the user and with the spectrometer. The program's parent process communicates with the browser and sends out commands to the spectrometer; the child process is mainly responsible for data acquisition. Communication between the processes is via the shared memory mechanism. The WWW pages that have been developed for the system make use of the frames feature of web browsers. The CGI program provides an intuitive user interface to the NMR spectrometer, making, in effect, a complex system an easy-to-use Web appliance. Copyright 1998 Academic Press.
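    The parent/child split with shared memory described above can be sketched in Python with the multiprocessing module: the child fills a shared buffer (standing in for data acquisition) while the parent remains free to serve requests. The buffer size, timing, and placeholder data are assumptions for the sketch, not the original UNIX CGI implementation.

```python
# Sketch of the parent/child split described for the CGI program: the child
# "acquires" data while the parent keeps serving user requests; they
# communicate through shared memory. The acquisition loop is a stand-in,
# not the spectrometer code.
import time
from multiprocessing import Process, Array, Value

def acquisition(shared_buffer, n_points):
    """Child process: fill the shared buffer as if acquiring a spectrum."""
    for i in range(len(shared_buffer)):
        shared_buffer[i] = float(i)   # placeholder for acquired data
        n_points.value = i + 1
        time.sleep(0.01)

if __name__ == "__main__":
    buffer = Array("d", 128)          # shared memory visible to both processes
    count = Value("i", 0)
    child = Process(target=acquisition, args=(buffer, count))
    child.start()
    time.sleep(0.2)                   # parent keeps handling "browser" requests here
    print(f"parent sees {count.value} points acquired so far")
    child.join()
```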

  4. User interfaces in space science instrumentation

    NASA Astrophysics Data System (ADS)

    McCalden, Alec John

    This thesis examines user interaction with instrumentation in the specific context of space science. It gathers together existing practice in machine interfaces with a look at potential future usage and recommends a new approach to space science projects with the intention of maximising their science return. It first takes a historical perspective on user interfaces and ways of defining and measuring the science return of a space instrument. Choices of research methodology are considered. Implementation details such as the concepts of usability, mental models, affordance and presentation of information are described, and examples of existing interfaces in space science are given. A set of parameters for use in analysing and synthesizing a user interface is derived by using a set of case studies of diverse failures and from previous work. A general space science user analysis is made by looking at typical practice, and an interview plus persona technique is used to group users with interface designs. An examination is made of designs in the field of astronomical instrumentation interfaces, showing the evolution of current concepts and including ideas capable of sustaining progress in the future. The parameters developed earlier are then tested against several established interfaces in the space science context to give a degree of confidence in their use. The concept of a simulator that is used to guide the development of an instrument over the whole lifecycle is described, and the idea is proposed that better instrumentation would result from more efficient use of the resources available. The previous ideas in this thesis are then brought together to describe a proposed new approach to a typical development programme, with an emphasis on user interaction. The conclusion shows that there is significant room for improvement in the science return from space instrumentation by attention to the user interface.

  5. DiAs User Interface: A Patient-Centric Interface for Mobile Artificial Pancreas Systems

    PubMed Central

    Keith-Hynes, Patrick; Guerlain, Stephanie; Mize, Benton; Hughes-Karvetski, Colleen; Khan, Momin; McElwee-Malloy, Molly; Kovatchev, Boris P.

    2013-01-01

    Background Recent in-hospital studies of artificial pancreas (AP) systems have shown promising results in improving glycemic control in patients with type 1 diabetes mellitus. The next logical step in AP development is to conduct transitional outpatient clinical trials with a mobile system that is controlled by the patient. In this article, we present the user interface (UI) of the Diabetes Assistant (DiAs), an experimental smartphone-based mobile AP system, and describe the reactions of a round of focus groups to the UI. This work is an initial inquiry involving a relatively small number of potential users, many of whom had never seen an AP system before, and the results should be understood in that light. Methods We began by considering how the UI of an AP system could be designed to make use of the familiar touch-based graphical UI of a consumer smartphone. After developing a working prototype UI, we enlisted a human factors specialist to perform a heuristic expert analysis. Next we conducted a formative evaluation of the UI through a series of three focus groups with N = 13 potential end users as participants. The UI was modified based upon the results of these studies, and the resulting DiAs system was used in transitional outpatient AP studies of adults in the United States and Europe. Results The DiAs UI was modified based on focus group feedback from potential users. The DiAs was subsequently used in JDRF- and AP@Home-sponsored transitional outpatient AP studies in the United States and Europe by 40 subjects for 2400 h with no adverse events. Conclusions Adult patients with type 1 diabetes mellitus are able to control an AP system successfully using a patient-centric UI on a commercial smartphone in a transitional outpatient environment. PMID:24351168

  6. Combining fuzzy mathematics with fuzzy logic to solve business management problems

    NASA Astrophysics Data System (ADS)

    Vrba, Joseph A.

    1993-12-01

    Fuzzy logic technology has been applied to control problems with great success. Because of this, many observers feel that fuzzy logic is applicable only in the control arena. However, business management problems almost never deal with crisp values. Fuzzy systems technology--a combination of fuzzy logic, fuzzy mathematics and a graphical user interface--is a natural fit for developing software to assist in typical business activities such as planning, modeling and estimating. This presentation discusses how fuzzy logic systems can be extended through the application of fuzzy mathematics and the use of a graphical user interface to make the information contained in fuzzy numbers accessible to business managers. As demonstrated through examples from actual deployed systems, this fuzzy systems technology has been employed successfully to provide solutions to the complex real-world problems found in the business environment.

  7. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    PubMed

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level and precede body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees) resulting from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, among others. Despite this benefit, eye tracking is not widely used as a control interface for robotics in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye tracking based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
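    The calibration idea, fitting a map from gaze measurements to the known 3-D robot positions collected along the space-filling path, can be sketched as a least-squares fit. The affine model and feature dimensionality below are simplifications of the actual binocular gaze plus head-tracking calibration.

```python
# Sketch of the calibration idea: the robot end-point traces a space-filling
# path while the user tracks it, and a linear map from gaze features to 3-D
# target positions is fitted by least squares. The affine model and feature
# dimensionality are simplifying assumptions.
import numpy as np

def fit_gaze_to_3d(gaze_features: np.ndarray, robot_xyz: np.ndarray) -> np.ndarray:
    """gaze_features: (n, d) samples; robot_xyz: (n, 3) known target positions."""
    X = np.hstack([gaze_features, np.ones((gaze_features.shape[0], 1))])  # affine term
    A, *_ = np.linalg.lstsq(X, robot_xyz, rcond=None)   # (d+1, 3) mapping
    return A

def gaze_to_endpoint(A: np.ndarray, gaze: np.ndarray) -> np.ndarray:
    """Predict the 3-D end-point target for a new gaze sample."""
    return np.append(gaze, 1.0) @ A
```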

  8. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation

    PubMed Central

    2011-01-01

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, &3) that contains a 'Kinnogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054

  9. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation.

    PubMed

    Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo

    2011-07-26

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, &3) that contains a 'Kinnogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.

  10. Development and usability testing of a web-based cancer symptom and quality-of-life support intervention.

    PubMed

    Wolpin, S E; Halpenny, B; Whitman, G; McReynolds, J; Stewart, M; Lober, W B; Berry, D L

    2015-03-01

    The feasibility and acceptability of computerized screening and patient-reported outcome measures have been demonstrated in the literature. However, patient-centered management of health information entails two challenges: gathering and presenting data using "patient-tailored" methods and supporting "patient-control" of health information. The design and development of many symptom and quality-of-life information systems have not included opportunities for systematically collecting and analyzing user input. As part of a larger clinical trial, the Electronic Self-Report Assessment for Cancer-II project, participatory design approaches were used to build and test new features and interfaces for patient/caregiver users. The research questions centered on patient/caregiver preferences with regard to the following: (a) content, (b) user interface needs, (c) patient-oriented summary, and (d) patient-controlled sharing of information with family, caregivers, and clinicians. Mixed methods were used with an emphasis on qualitative approaches; focus groups and individual usability tests were the primary research methods. Focus group data were content analyzed, while individual usability sessions were assessed with both qualitative and quantitative methods. We identified 12 key patient/caregiver preferences through focus groups with 6 participants. We implemented seven of these preferences during the iterative design process. We deferred development for some of the preferences due to resource constraints. During individual usability testing (n = 8), we were able to identify 65 usability issues ranging from minor user confusion to critical errors that blocked task completion. The participatory development model that we used led to features and design revisions that were patient centered. We are currently evaluating new approaches for the application interface and for future research pathways. We encourage other researchers to adopt user-centered design approaches when building patient-centered technologies. © The Author(s) 2014.

  11. Orbiter middeck/payload standard interfaces control document

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The interfaces which shall be provided by the baseline shuttle mid-deck for payload use within the mid-deck area are defined, as well as all constraints which shall be observed by all the users of the defined interfaces. Commonality was established with respect to analytical approaches, analytical models, technical data and definitions for integrated analyses by all the interfacing parties. Any payload interfaces that are out of scope with the standard interfaces defined shall be defined in a Payload Unique Interface Control Document (ICD) for a given payload. Each Payload Unique ICD will have comparable paragraphs to this ICD and will have a corresponding notation of A, for applicable; N/A, for not applicable; N, for note added for explanation; and E, for exception. On any flight, the STS reserves the right to assign locations to both payloads mounted on an adapter plate(s) and payloads stored within standard lockers. Specific locations requests and/or requirements exceeding standard mid-deck payload requirements may result in a reduction in manifesting opportunities.

  12. Intelligent Context-Aware and Adaptive Interface for Mobile LBS

    PubMed Central

    Liu, Yanhong

    2015-01-01

    A context-aware user interface plays an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment, where location based services users are impeded by device limitations. Better context-aware human-computer interaction models for mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but to understand the human processes involved in spatial query, which will in turn inform the detailed design of better user interfaces for mobile location based services. In this study, a context-aware adaptive model for mobile location based services interfaces is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model, we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model accommodates users' demands in a complex environment; the experimental results suggest its feasibility. PMID:26457077
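
    The purpose/adjustment/adaptation structure described above can be pictured as a simple loop: infer the user's purpose from context, adjust the spatial query accordingly, and adapt the interface layout. The Python fragment below is a loose illustration of that loop only; the context fields, rules, and layout names are invented for the example and are not taken from the paper.

        # Illustrative sketch of a context-aware adaptation loop.
        # All context cues, rules, and layout names are invented.
        def infer_purpose(context):
            """Guess the user's current purpose from simple context cues."""
            if context["speed_kmh"] > 20:
                return "driving_navigation"
            if context["time_of_day"] in ("12:00", "13:00"):
                return "find_lunch"
            return "general_browsing"

        def adjust_query(purpose, query):
            """Adjust a spatial query to match the inferred purpose."""
            if purpose == "find_lunch":
                return {**query, "category": "restaurant", "radius_m": 500}
            if purpose == "driving_navigation":
                return {**query, "category": "fuel", "radius_m": 2000}
            return query

        def adapt_interface(purpose):
            """Pick an interface layout suited to the purpose."""
            return {"driving_navigation": "large_buttons_voice",
                    "find_lunch": "list_with_ratings",
                    "general_browsing": "default_map"}[purpose]

        context = {"speed_kmh": 3, "time_of_day": "12:00"}
        purpose = infer_purpose(context)
        print(adjust_query(purpose, {"keyword": "food"}), adapt_interface(purpose))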

  13. Flexible Parsing.

    DTIC Science & Technology

    1986-06-30

    Machine Studies .. 14. Minton, S. N., Hayes, P. J., and Fain, J. E. Controlling Search in Flexible Parsing. Proc. Ninth Int. Jt. Conf. on Artificial...interaction through the COUSIN command interface", International Journal of Man- Machine Studies , Vol. 19, No. 3, September 1983, pp. 285-305. 8...in a gracefully interacting user interface," "Dynamic strategy selection in flexible parsing," and "Parsing spoken language: a semantic case frame

  14. Reasoning about Users' Actions in a Graphical User Interface.

    ERIC Educational Resources Information Center

    Virvou, Maria; Kabassi, Katerina

    2002-01-01

    Describes a graphical user interface called IFM (Intelligent File Manipulator) that provides intelligent help to users. Explains two underlying reasoning mechanisms, one an adaptation of human plausible reasoning and one that performs goal recognition based on the effects of users' commands; and presents results of an empirical study that…

  15. Glove-Talk II - a neural-network interface which maps gestures to parallel formant speech synthesizer controls.

    PubMed

    Fels, S S; Hinton, G E

    1997-01-01

    Glove-Talk II is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-Talk II uses several input devices, a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. With Glove-Talk II, the subject can speak slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
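
    The gated architecture described above (a gating network weighting the outputs of a vowel network and a consonant network to produce the ten synthesizer control parameters) can be sketched compactly. The fragment below is an assumed, toy reconstruction in Python/NumPy with random weights, intended only to show how the gated blend works, not to reproduce the trained Glove-Talk II networks.

        # Toy sketch of a gated blend of two networks (assumed architecture,
        # not the authors' code). Weights are random placeholders.
        import numpy as np

        rng = np.random.default_rng(0)

        def mlp(x, w1, w2):
            """One-hidden-layer network with tanh units."""
            return np.tanh(x @ w1) @ w2

        # 3 hand features in, 8 hidden units, 10 synthesizer controls out;
        # the gating network produces a single mixing value.
        w1_v, w2_v = rng.normal(size=(3, 8)), rng.normal(size=(8, 10))
        w1_c, w2_c = rng.normal(size=(3, 8)), rng.normal(size=(8, 10))
        w1_g, w2_g = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))

        def speech_controls(hand_features):
            vowel = mlp(hand_features, w1_v, w2_v)
            consonant = mlp(hand_features, w1_c, w2_c)
            g = 1.0 / (1.0 + np.exp(-mlp(hand_features, w1_g, w2_g)))  # gate in (0, 1)
            return g * vowel + (1.0 - g) * consonant  # blended control vector

        print(speech_controls(np.array([0.1, -0.3, 0.7])).shape)  # -> (10,)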

  16. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces.

    PubMed

    Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with the signals used for classification and thereby introduce changes in signal properties that could impede BCI classification performance. To improve BCI performance, we propose an approach that could potentially describe the different mental states that influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics whose sources lie outside the primary motor areas but affect the signal of interest originating there, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates that were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.

  17. Support for User Interfaces for Distributed Systems

    NASA Technical Reports Server (NTRS)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.
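
    The framework itself is Java and relies on dynamic class loading and reflection within the JavaBeans component architecture. Purely to illustrate that idea in compact form, the sketch below shows an analogous pattern in Python with importlib: GUI panels are named in a configuration file and instantiated at run time, so no panel type is hard-coded into the framework. The module, class, and field names are hypothetical.

        # Illustrative analogue of configuration-driven, dynamic GUI assembly.
        # Module and class names below are hypothetical.
        import importlib
        import json

        CONFIG = json.loads("""
        {
          "panels": [
            {"module": "telemetry_widgets", "class": "PlotPanel",
             "args": {"channel": "battery_voltage"}},
            {"module": "command_widgets", "class": "CommandButton",
             "args": {"command": "SAFE_MODE"}}
          ]
        }
        """)

        def build_panels(config):
            """Instantiate the GUI components named in the configuration at run time."""
            panels = []
            for spec in config["panels"]:
                module = importlib.import_module(spec["module"])  # dynamic load
                cls = getattr(module, spec["class"])              # reflection-style lookup
                panels.append(cls(**spec["args"]))                # generic construction
            return panels

        # build_panels(CONFIG) would only succeed if the hypothetical modules
        # existed; the point is that no panel type is hard-coded in the loader.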

  18. CONSOLE: A CAD tandem for optimization-based design interacting with user-supplied simulators

    NASA Technical Reports Server (NTRS)

    Fan, Michael K. H.; Wang, Li-Sheng; Koninckx, Jan; Tits, Andre L.

    1989-01-01

    CONSOLE employs a recently developed design methodology (International Journal of Control 43:1693-1721) which provides the designer with a congenial environment to express his problem as a multiple-objective constrained optimization problem and allows him to refine his characterization of optimality when a suboptimal design is approached. To this end, in CONSOLE, the designer formulates the design problem using a high-level language and performs design tasks and explores tradeoffs through a few short and clearly defined commands. The range of problems that can be solved efficiently using a CAD tool depends very much on the ability of this tool to be interfaced with user-supplied simulators. For instance, when designing a control system one makes use of the characteristics of the plant, and therefore a model of the plant under study has to be made available to the CAD tool. CONSOLE allows for easy interfacing of almost any simulator the user has available. To date, CONSOLE has been used successfully in many applications, including the design of controllers for a flexible arm and for a robotic manipulator and the solution of a parameter selection problem for a neural network.
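
    The core pattern, an optimizer calling back into a user-supplied simulator while respecting design constraints, can be illustrated generically. The sketch below is not CONSOLE (which has its own problem-description language and optimization engine); it simply scalarizes two toy objectives returned by a stand-in simulator and minimizes them with SciPy under an effort constraint, with all functions and numbers invented for the example.

        # Generic illustration of optimization driven by a user-supplied
        # simulator; the "simulator" and constraint values are placeholders.
        import numpy as np
        from scipy.optimize import minimize

        def simulator(gains):
            """Stand-in for a user-supplied plant simulation.
            Returns (settling_time, control_effort) for controller gains."""
            kp, kd = gains
            settling_time = 1.0 / (0.1 + kp) + 0.5 * kd
            control_effort = kp**2 + kd**2
            return settling_time, control_effort

        def scalarized_objective(gains, weights=(1.0, 0.05)):
            t_s, effort = simulator(gains)
            return weights[0] * t_s + weights[1] * effort

        # Constraint: keep control effort below a budget (a stand-in design spec).
        constraints = [{"type": "ineq", "fun": lambda g: 25.0 - simulator(g)[1]}]

        result = minimize(scalarized_objective, x0=np.array([1.0, 0.1]),
                          bounds=[(0.0, 10.0), (0.0, 10.0)], constraints=constraints)
        print(result.x, result.fun)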

  19. Final Report: MaRSPlus Sensor System Electrical Cable Management and Distributed Motor Control Computer Interface

    NASA Technical Reports Server (NTRS)

    Reil, Robin

    2011-01-01

    The success of JPL's Next Generation Imaging Spectrometer (NGIS) in Earth remote sensing has inspired a follow-on instrument project, the MaRSPlus Sensor System (MSS). One of JPL's responsibilities in the MSS project involves updating the documentation from the previous JPL airborne imagers to provide all the information necessary for an outside customer to operate the instrument independently. As part of this documentation update, I created detailed electrical cabling diagrams to provide JPL technicians with clear and concise build instructions, along with a database to track the status of cables from order to build to delivery. Simultaneously, a distributed motor control system is being developed for potential use on the proposed 2018 Mars rover mission. This system would significantly reduce the mass necessary for rover motor control, making more of the mass budget available to other important spacecraft systems. The current stage of the project consists of a desktop computer talking to a single "cold box" unit containing the electronics to drive a motor. In order to test the electronics, I developed a graphical user interface (GUI) using MATLAB to allow a user to send simple commands to the cold box and display the responses received in a user-friendly format.
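
    The report's GUI was written in MATLAB; as a rough analogue only, the sketch below shows how a similar command/response test console could look in Python using Tkinter and pyserial. The port name, baud rate, and newline-terminated ASCII command protocol are placeholders, not the actual cold-box interface.

        # Hypothetical command/response test console over a serial link.
        # Port, baud rate, and protocol are placeholders.
        import tkinter as tk
        import serial  # pyserial

        PORT, BAUD = "/dev/ttyUSB0", 115200  # placeholders

        def main():
            link = serial.Serial(PORT, BAUD, timeout=1)

            root = tk.Tk()
            root.title("Cold box test console (sketch)")

            log = tk.Text(root, height=15, width=60)
            log.pack()
            entry = tk.Entry(root, width=50)
            entry.pack(side=tk.LEFT)

            def send_command():
                cmd = entry.get().strip()
                link.write((cmd + "\n").encode("ascii"))         # send command
                reply = link.readline().decode("ascii").strip()  # read one-line reply
                log.insert(tk.END, f"> {cmd}\n< {reply}\n")
                entry.delete(0, tk.END)

            tk.Button(root, text="Send", command=send_command).pack(side=tk.LEFT)
            root.mainloop()

        if __name__ == "__main__":
            main()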

  20. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces

    PubMed Central

    Grissmann, Sebastian; Zander, Thorsten O.; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with the signals used for classification and thereby introduce changes in signal properties that could impede BCI classification performance. To improve BCI performance, we propose an approach that could potentially describe the different mental states that influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics whose sources lie outside the primary motor areas but affect the signal of interest originating there, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates that were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios. PMID:28769776
