Sample records for user interface devices

  1. Distributed user interfaces for clinical ubiquitous computing applications.

    PubMed

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices, such as digital pens, an active desk, and walk-up displays, that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events, and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite for future mobile user interfaces and essential to the development of clinical multi-device environments.
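
    The run-time component distribution described above can be illustrated with a minimal sketch. The class, device, and capability names below are hypothetical illustrations, not the actual NOSTOS API:

```python
# Minimal sketch of run-time UI component distribution (hypothetical API,
# not the actual NOSTOS implementation).

class Device:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.components = []

    def render(self, component):
        self.components.append(component)


class DUIRegistry:
    """Tracks discovered devices and routes UI components to them."""

    def __init__(self):
        self.devices = []

    def discover(self, device):
        # Called when service discovery finds a new device on the network.
        self.devices.append(device)

    def distribute(self, component, needs):
        # Send the component to the first device that supports its needs.
        for dev in self.devices:
            if needs <= dev.capabilities:
                dev.render(component)
                return dev.name
        return None


registry = DUIRegistry()
registry.discover(Device("active-desk", {"display", "pen-input"}))
registry.discover(Device("walkup-display", {"display"}))

target = registry.distribute("patient-form", needs={"display", "pen-input"})
print(target)  # -> active-desk (the only device that supports pen input)
```

    A standalone application would call `distribute` once per component, so the same form lands on whichever suitable device happens to be present in the workspace.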

  2. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, or trackball. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent-inferencing methodology.
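
    A gaze-contingent rotation mapping of the kind described can be sketched as follows. The dead-zone and gain values are illustrative assumptions, not parameters from the paper:

```python
# Sketch: map a gaze point to a 3D rotation command (illustrative only;
# the paper's actual intent-inference method is more sophisticated).

def gaze_to_rotation(gaze_x, gaze_y, width, height, dead_zone=0.1, gain=90.0):
    """Convert a gaze point (pixels) to yaw/pitch rates in deg/s.

    Gaze near the screen centre (within the dead zone) produces no
    rotation, so merely looking at the object does not move it.
    """
    # Normalise to [-1, 1] with (0, 0) at the screen centre.
    nx = (gaze_x - width / 2) / (width / 2)
    ny = (gaze_y - height / 2) / (height / 2)
    yaw = 0.0 if abs(nx) < dead_zone else gain * nx
    pitch = 0.0 if abs(ny) < dead_zone else gain * ny
    return yaw, pitch


print(gaze_to_rotation(960, 540, 1920, 1080))   # centre gaze -> (0.0, 0.0)
print(gaze_to_rotation(1920, 540, 1920, 1080))  # right edge -> (90.0, 0.0)
```

    The dead zone is what distinguishes "looking at" from "commanding", which is exactly the intent-inference problem the abstract flags for features like zoom.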

  3. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-01-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, or trackball. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent-inferencing methodology.

  4. Development of a simulated smart pump interface.

    PubMed

    Elias, Beth L; Moss, Jacqueline A; Shih, Alan; Dillavou, Marcus

    2014-01-01

    Medical device user interfaces are increasingly complex, resulting in a need for evaluation in clinically accurate settings. Simulation of these interfaces can allow for evaluation, training, and use for research without the risk of harming patients and with a significant cost reduction over using the actual medical devices. This pilot project was phase 1 of a study to define and evaluate a methodology for development of simulated medical device interface technology to be used for education, device development, and research. Digital video and audio recordings of interface interactions were analyzed to develop a model of a smart intravenous medication infusion pump user interface. This model was used to program a high-fidelity simulated smart intravenous medication infusion pump user interface on an inexpensive netbook platform.

  5. 78 FR 77209 - Accessibility of User Interfaces, and Video Programming Guides and Menus

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-20

    ... user interfaces on digital apparatus and video programming guides and menus on navigation devices for... apparatus and navigation devices used to view video programming. The rules we adopt here will effectuate...--that is, devices and other equipment used by consumers to access multichannel video programming and...

  6. Touch in Computer-Mediated Environments: An Analysis of Online Shoppers' Touch-Interface User Experiences

    ERIC Educational Resources Information Center

    Chung, Sorim

    2016-01-01

    Over the past few years, one of the most fundamental changes in current computer-mediated environments has been input devices, moving from mouse devices to touch interfaces. However, most studies of online retailing have not considered device environments as retail cues that could influence users' shopping behavior. In this research, I examine the…

  7. Development of a Mobile User Interface for Image-based Dietary Assessment.

    PubMed

    Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2010-12-31

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.

  8. T-LECS: The Control Software System for MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, T.; Omata, K.; Konishi, M.; Ichikawa, T.; Suzuki, R.; Tokoku, C.; Katsuno, Y.; Nishimura, T.

    2006-07-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru Telescope. We present the system design of the control software system for MOIRCS, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS is a PC-Linux based network-distributed system. Two PCs equipped with the focal plane array system operate the two HAWAII2 detectors, and another PC is used for the user interfaces and a database server. These PCs also control various devices for observations distributed on a TCP/IP network. T-LECS has three interfaces: an interface to the devices and two user interfaces. One of the user interfaces connects to the integrated observation control system (Subaru Observation Software System) for observers, and the other provides system developers with direct access to the devices of MOIRCS. To help the communication between these interfaces, we employ an SQL database system.
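
    The database-mediated communication described above can be sketched with an in-memory SQLite table standing in for T-LECS's SQL database. The table and column names here are hypothetical, not taken from T-LECS:

```python
# Sketch: an SQL database mediating between device-control processes and
# user interfaces, in the spirit of T-LECS (table/column names are
# hypothetical illustrations).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE device_status (device TEXT PRIMARY KEY, value TEXT)")


def report(device, value):
    # Called by a device-control process after commanding hardware.
    db.execute("INSERT OR REPLACE INTO device_status VALUES (?, ?)",
               (device, value))
    db.commit()


def read(device):
    # Called by either user interface to see the current device state.
    row = db.execute("SELECT value FROM device_status WHERE device = ?",
                     (device,)).fetchone()
    return row[0] if row else None


report("filter_wheel", "J-band")
report("detector", "exposing")
print(read("filter_wheel"))  # -> J-band
```

    Because both user interfaces read the same table, the observer-facing interface and the engineering interface stay consistent without talking to each other directly.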

  9. Systems, methods, and products for graphically illustrating and controlling a droplet actuator

    NASA Technical Reports Server (NTRS)

    Brafford, Keith R. (Inventor); Pamula, Vamsee K. (Inventor); Paik, Philip Y. (Inventor); Pollack, Michael G. (Inventor); Sturmer, Ryan A. (Inventor); Smith, Gregory F. (Inventor)

    2010-01-01

    Systems for controlling a droplet microactuator are provided. According to one embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, and a display device displaying a user interface electronically coupled to the controller, wherein the system is programmed and configured to permit a user to effect a droplet manipulation by interacting with the user interface. According to another embodiment, a system is provided and includes a processor, a display device electronically coupled to the processor, and software loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller and programmed to display an interactive map of a droplet microactuator. According to yet another embodiment, a system is provided and includes a controller, a droplet microactuator electronically coupled to the controller, a display device displaying a user interface electronically coupled to the controller, and software for executing a protocol loaded and/or stored in a storage device electronically coupled to the controller, a memory device electronically coupled to the controller, and/or the controller.

  10. Development of a Mobile User Interface for Image-based Dietary Assessment

    PubMed Central

    Kim, SungYe; Schap, TusaRebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J.; Ebert, David S.; Boushey, Carol J.

    2011-01-01

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records. PMID:24455755

  11. Railroad track inspection interface demonstration : final report.

    DOT National Transportation Integrated Search

    2016-01-01

    This project developed a track data user interface utilizing the Google Glass optical display device. The interface allows the user to recall data stored remotely and view the data on the Google Glass. The technical effort required developing a com...

  12. Pilot-Vehicle Interface

    DTIC Science & Technology

    1993-11-01

    way is to develop a crude but working model of an entire system. The other is by developing a realistic model of the user interface, leaving out most...devices or by incorporating software for a more user-friendly interface. Automation introduces the possibility of making data entry errors. Multimode...across various human-computer interfaces. Memory: Minimize the amount of information that the user must maintain in short-term memory

  13. 14 CFR § 1215.102 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., and the necessary TDRSS operational areas, interface devices, and NASA communication circuits that... interface. (c) Bit stream. The electronic signals acquired by TDRSS from the user craft or the user...

  14. A Framework for the Development of Context-Adaptable User Interfaces for Ubiquitous Computing Systems.

    PubMed

    Varela, Gervasio; Paz-Lopez, Alejandro; Becerra, Jose A; Duro, Richard

    2016-07-07

    This paper addresses the problem of developing user interfaces for Ubiquitous Computing (UC) and Ambient Intelligence (AmI) systems. These kinds of systems are expected to provide a natural user experience, considering interaction modalities adapted to the user's abilities and preferences and using whatever interaction devices are present in the environment. These interaction devices are not necessarily known at design time. The task is quite complicated due to the variety of devices and technologies and the diversity of scenarios, and it usually burdens the developer with the need to create many different UIs in order to cover the foreseeable user-environment combinations. Here, we propose a UI abstraction framework for UC and AmI systems that effectively improves the portability of those systems between different environments and for different users. It allows developers to design and implement a single UI capable of being deployed with different devices and modalities regardless of the physical location.
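
    One way to picture the single-UI-many-devices idea is a binding step that pairs abstract UI elements with whatever concrete devices are discovered at run-time. The element and device names below are hypothetical illustrations, not the framework's actual API:

```python
# Sketch: binding one abstract UI to the devices present in the current
# environment (names are hypothetical, not from the paper's framework).

ABSTRACT_UI = [
    {"id": "greet", "kind": "output", "content": "Welcome home"},
    {"id": "lights", "kind": "command", "content": "Turn lights on?"},
]

# Devices discovered in this environment, mapped to their modality.
ENVIRONMENT = {"speaker": "voice", "wall-panel": "touch"}


def bind(ui, environment, preferred="voice"):
    """Bind each abstract element to a concrete device, preferring the
    user's favoured modality and falling back to anything available."""
    devices = {modality: dev for dev, modality in environment.items()}
    chosen = devices.get(preferred) or next(iter(devices.values()))
    return [(element["id"], chosen) for element in ui]


print(bind(ABSTRACT_UI, ENVIRONMENT))              # voice present -> speaker
print(bind(ABSTRACT_UI, {"wall-panel": "touch"}))  # fallback -> wall-panel
```

    The developer writes `ABSTRACT_UI` once; only the binding step changes when the user moves to an environment with different devices.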

  15. Embedded Process Modeling, Analogy-Based Option Generation and Analytical Graphic Interaction for Enhanced User-Computer Interaction: An Interactive Storyboard of Next Generation User-Computer Interface Technology. Phase 1

    DTIC Science & Technology

    1988-03-01

    structure of the interface is a mapping from the physical world [for example, the use of icons, which have inherent meaning to users but represent...design alternatives. Mechanisms for linking the user to the computer include physical devices (keyboards), actions taken with the devices (keystrokes...

  16. Recommending personally interested contents by text mining, filtering, and interfaces

    DOEpatents

    Xu, Songhua

    2015-10-27

    A personalized content recommendation system includes a client interface device configured to monitor a user's information data stream. A collaborative filter remote from the client interface device generates automated predictions about the interests of the user. A database server stores personal behavioral profiles and user preferences based on a plurality of monitored past behaviors and an output of the collaborative user personal interest inference engine. A programmed personal content recommendation server filters items in an incoming information stream against the personal behavioral profile and identifies only those items that substantially match the profile. The identified personally relevant content is then recommended to the user according to a priority that may consider the strength of the personal-interest match and the context of the user's information consumption behavior, as indicated by the user's content consumption mode.
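
    A minimal sketch of the profile-matching step, using cosine similarity over term counts. The patent record does not specify the similarity measure or threshold, so both are illustrative assumptions:

```python
# Sketch of profile-based stream filtering (illustrative; the patent's
# actual inference engine and priority scheme are not specified here).
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def recommend(stream, profile, threshold=0.3):
    """Keep only items that substantially match the profile, ranked by
    similarity (the 'priority' in the abstract)."""
    scored = [(cosine(Counter(item.split()), profile), item) for item in stream]
    return [item for score, item in sorted(scored, reverse=True)
            if score >= threshold]


profile = Counter("robotics sensors control robotics".split())
stream = ["new robotics control library", "celebrity gossip roundup",
          "sensors for robotics control"]
print(recommend(stream, profile))  # best matches first; gossip filtered out
```

    In the patented system the profile would be updated continuously from monitored behavior rather than fixed as here.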

  17. Fast and Efficient Radiological Interventions via a Graphical User Interface Commanded Magnetic Resonance Compatible Robotic Device

    PubMed Central

    Özcan, Alpay; Christoforou, Eftychios; Brown, Daniel; Tsekos, Nikolaos

    2011-01-01

    The graphical user interface for an MR-compatible robotic device has the capability of displaying oblique MR slices in 2D and in a 3D virtual environment, along with a representation of the robotic arm, in order to swiftly complete the intervention. Using the advantages of the MR modality, the device saves time and effort, is safer for the medical staff, and is more comfortable for the patient. PMID:17946067

  18. Human Factors Engineering and testing for a wearable, long duration ultrasound system self-applied by an end user.

    PubMed

    Taggart, Rebecca; Langer, Matthew D; Lewis, George K

    2014-01-01

    One of the major challenges in the design of a new class of medical device is ensuring that the device will have a safe and effective user interface for the intended users. Human Factors Engineering addresses these concerns through direct study of how a user interacts with newly designed devices with unique features. In this study, a novel long duration, low intensity therapeutic ultrasound device is tested by 20 end users representative of the intended user population. Over 90% of users were able to operate the device successfully. The therapeutic ultrasound device was found to be reasonably safe and effective for the intended users, uses, and use environments.

  19. A novel asynchronous access method with binary interfaces

    PubMed Central

    2008-01-01

    Background: Traditionally synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods: We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. Then, a recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results: When compared to a random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion: This novel access method offers a variety of advantages over traditionally synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
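
    A simplified stand-in for the error-driven selection scheme can illustrate why a good prior over outcomes reduces switch activations. The paper's recursive estimator is more sophisticated than this greedy sketch:

```python
# Sketch of error-driven asynchronous selection with a binary switch
# (a simplified stand-in for the paper's recursive intent estimator).

def select(candidates, target, prior=None):
    """The device proposes candidates in order of prior likelihood.
    The user activates the switch only when the current proposal is
    wrong; silence confirms it. Returns (command, activation count)."""
    order = sorted(candidates, key=lambda c: -(prior or {}).get(c, 0))
    activations = 0
    for proposal in order:
        if proposal == target:   # user stays silent -> accepted
            return proposal, activations
        activations += 1         # user activates: reject, move on


prior = {"stop": 0.5, "forward": 0.3, "left": 0.1, "right": 0.1}
cmd, presses = select(["left", "right", "forward", "stop"], "forward", prior)
print(cmd, presses)  # a good prior ranks 'forward' early -> few activations
```

    The contextual assumptions in the paper play the role of `prior` here: the better the device's guess about likely outcomes, the fewer corrective activations the user must make.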

  20. Circling motion and screen edges as an alternative input method for on-screen target manipulation.

    PubMed

    Ka, Hyun W; Simpson, Richard C

    2017-04-01

    To investigate a new alternative interaction method, called the circling interface, for manipulating on-screen objects. To specify a target, the user makes a circling motion around the target. To specify a desired pointing command with the circling interface, each edge of the screen is used: the user selects a command before circling the target. To evaluate the circling interface, we conducted an experiment with 16 participants, comparing performance on pointing tasks with different combinations of selection method (circling interface, physical mouse and dwelling interface) and input device (normal computer mouse, head pointer and joystick mouse emulator). A circling interface is compatible with many types of pointing devices, does not require physical activation of mouse buttons, and is more efficient than dwell-clicking. Across all common pointing operations, the circling interface tended to produce faster performance with a head-mounted mouse emulator than with a joystick mouse, and its accuracy outperformed the dwelling interface. It was demonstrated that the circling interface has potential as an alternative pointing method for selecting and manipulating objects in a graphical user interface. Implications for Rehabilitation: A circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. The circling interface can also work with AAC devices.
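
    Target specification by circling can be implemented as an enclosure test on the sampled cursor path; a ray-casting point-in-polygon check is one plausible realization, not necessarily the authors' algorithm:

```python
# Sketch: decide whether a circling gesture encloses a target point using
# a ray-casting point-in-polygon test (one plausible implementation of
# target specification; not necessarily the authors' method).

def encloses(path, point):
    """path: list of (x, y) cursor samples forming the circling gesture."""
    x, y = point
    inside = False
    n = len(path)
    for i in range(n):
        x1, y1 = path[i]
        x2, y2 = path[(i + 1) % n]   # close the loop back to the start
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


gesture = [(0, 0), (10, 0), (10, 10), (0, 10)]  # a rough loop around (5, 5)
print(encloses(gesture, (5, 5)))    # True: the icon was circled
print(encloses(gesture, (20, 5)))   # False: outside the gesture
```

    A real implementation would also need to decide when a stroke counts as "closed enough" to be treated as a circle, which this sketch leaves out.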

  1. The effect of visualizing the flow of multimedia content among and inside devices.

    PubMed

    Lee, Dong-Seok

    2009-05-01

    This study introduces a user interface, referred to as the flow interface, which provides a graphical representation of the movement of content among and inside audio/video devices. The proposed interface provides a different frame of reference with content-oriented visualization of the generation, manipulation, storage, and display of content as well as input and output. The flow interface was applied to a VCR/DVD recorder combo, one of the most complicated consumer products. A between-group experiment was performed to determine whether the flow interface helps users to perform various tasks and to examine the learning effect of the flow interface, particularly in regard to hooking up and recording tasks. The results showed that participants with access to the flow interface performed better in terms of success rate and elapsed time. In addition, the participants indicated that they could easily understand the flow interface. The potential of the flow interface for application to other audio video devices, and design issues requiring further consideration, are discussed.

  2. Experiment on a novel user input for computer interface utilizing tongue input for the severely disabled.

    PubMed

    Kencana, Andy Prima; Heng, John

    2008-11-01

    This paper introduces a novel passive tongue control and tracking device. The device is intended to be used by severely disabled or quadriplegic persons. The main difference between this device and other existing tongue tracking devices is that the sensor employed is passive, meaning no powered electrical sensor needs to be inserted into the user's mouth and hence there are no trailing wires. This haptic interface device employs inductive sensors to track the position of the user's tongue. The device is able to perform two main PC functions: those of the keyboard and the mouse. The results show that this device allows a severely disabled person to have some control over his environment, such as turning daily electrical devices or appliances on and off, or to be used as a viable PC Human Computer Interface (HCI) by tongue control. The operating principle and set-up of such a novel passive tongue HCI has been established with successful laboratory trials and experiments. Further clinical trials will be required to test the device on disabled persons before it is ready for future commercial development.

  3. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study.

    PubMed

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-10-25

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object in six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the "Florida Secundaria" high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed for more meaningful learning (the concepts learnt were more durable).

  4. A serial digital data communications device. [for real time flight simulation

    NASA Technical Reports Server (NTRS)

    Fetter, J. L.

    1977-01-01

    A general purpose computer peripheral device which is used to provide a full-duplex, serial, digital data transmission link between a Xerox Sigma computer and a wide variety of external equipment, including computers, terminals, and special purpose devices is reported. The interface has an extensive set of user defined options to assist the user in establishing the necessary data links. This report describes those options and other features of the serial communications interface and its performance by discussing its application to a particular problem.

  5. A Framework for the Development of Context-Adaptable User Interfaces for Ubiquitous Computing Systems

    PubMed Central

    Varela, Gervasio; Paz-Lopez, Alejandro; Becerra, Jose A.; Duro, Richard

    2016-01-01

    This paper addresses the problem of developing user interfaces for Ubiquitous Computing (UC) and Ambient Intelligence (AmI) systems. These kinds of systems are expected to provide a natural user experience, considering interaction modalities adapted to the user's abilities and preferences and using whatever interaction devices are present in the environment. These interaction devices are not necessarily known at design time. The task is quite complicated due to the variety of devices and technologies and the diversity of scenarios, and it usually burdens the developer with the need to create many different UIs in order to cover the foreseeable user-environment combinations. Here, we propose a UI abstraction framework for UC and AmI systems that effectively improves the portability of those systems between different environments and for different users. It allows developers to design and implement a single UI capable of being deployed with different devices and modalities regardless of the physical location. PMID:27399711

  6. Accessibility of Mobile Devices for Visually Impaired Users: An Evaluation of the Screen-Reader VoiceOver.

    PubMed

    Smaradottir, Berglind; Håland, Jarle; Martinez, Santiago

    2017-01-01

    A mobile device's touchscreen allows users to use a choreography of hand gestures to interact with the user interface. A screen reader on a mobile device is designed to support the interaction of visually disabled users while using gestures. This paper presents an evaluation of VoiceOver, a screen reader in Apple Inc. products. The evaluation was a part of the research project "Visually impaired users touching the screen - a user evaluation of assistive technology".

  7. Haptic-STM: a human-in-the-loop interface to a scanning tunneling microscope.

    PubMed

    Perdigão, Luís M A; Saywell, Alex

    2011-07-01

    The operation of a haptic device interfaced with a scanning tunneling microscope (STM) is presented here. The user moves the STM tip in three dimensions by means of a stylus attached to the haptic instrument. The tunneling current measured by the STM is converted to a vertical force, applied to the stylus and felt by the user, with the user being incorporated into the feedback loop that controls the tip-surface distance. A haptic-STM interface of this nature allows the user to feel atomic features on the surface and facilitates the tactile manipulation of the adsorbate/substrate system. The operation of this device is demonstrated via the room temperature STM imaging of C(60) molecules adsorbed on an Au(111) surface in ultra-high vacuum.
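
    The current-to-force conversion can be sketched as a logarithmic mapping around a setpoint, which linearizes the exponential distance dependence of the tunneling current. The constants and units below are illustrative assumptions, not values from the paper:

```python
# Sketch: converting an STM tunneling current to a vertical haptic force.
# Setpoint, gain, and clamp values are illustrative assumptions, not
# parameters from the paper.
from math import log


def current_to_force(current_nA, setpoint_nA=1.0, gain=0.5, max_force=3.0):
    """Higher-than-setpoint current means the tip is too close to the
    surface, so push the stylus up (positive force), and vice versa.
    Log scaling linearises the roughly exponential dependence of the
    tunneling current on tip-surface distance."""
    force = gain * log(current_nA / setpoint_nA)
    # Clamp so a current spike cannot jerk the user's hand.
    return max(-max_force, min(max_force, force))


print(current_to_force(1.0))  # at the setpoint: zero force, tip on target
print(current_to_force(4.0))  # above the setpoint: upward (positive) force
```

    Because the user's hand closes the loop, moving the stylus in response to this force is what regulates the tip-surface distance.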

  8. Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study

    PubMed Central

    Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier

    2016-01-01

    This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object in six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the “Florida Secundaria” high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed for more meaningful learning (the concepts learnt were more durable). PMID:27792132

  9. Connected cane: Tactile button input for controlling gestures of iOS voiceover embedded in a white cane.

    PubMed

    Batterman, Jared M; Martin, Vincent F; Yeung, Derek; Walker, Bruce N

    2018-01-01

    Accessibility of assistive consumer devices is an emerging research area with potential to benefit users both with and without visual impairments. In this article, we discuss the research and evaluation of a tactile button interface for controlling an iOS device's native VoiceOver gesture navigation (Apple Accessibility, 2014). This research effort identified potential safety and accessibility issues for users trying to interact with and control their touchscreen mobile iOS devices while traveling independently. Furthermore, this article discusses the participatory design process used in creating a solution that aims to solve the issues of utilizing a tactile button interface in a novel device. The overall goal of this study is to enable visually impaired white cane users to access their mobile iOS device's capabilities and navigation aids more safely and efficiently on the go.

  10. Acquisition of ICU data: concepts and demands.

    PubMed

    Imhoff, M

    1992-12-01

    As data overload is a problem in critical care today, it is of utmost importance to improve the acquisition, storage, integration, and presentation of medical data, which appears feasible only with the help of bedside computers. The data originate from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS) and (4) manual input. All sources differ markedly in the quality and quantity of data and in the demands of the interfaces between the source of data and the patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks, and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface, which must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources will be necessary in the future for additional interfacing devices such as speech recognition and 3D GUIs. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development, leaving no room for solutions based on traditional concepts of personal computers. (ABSTRACT TRUNCATED AT 250 WORDS)

  11. My thoughts through a robot's eyes: an augmented reality-brain-machine interface.

    PubMed

    Kansaku, Kenji; Hata, Naoki; Takano, Kouji

    2010-02-01

    A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.

  12. Use of force feedback to enhance graphical user interfaces

    NASA Astrophysics Data System (ADS)

    Rosenberg, Louis B.; Brave, Scott

    1996-04-01

    This project focuses on the use of force feedback sensations to enhance user interaction with standard graphical user interface paradigms. While typical joystick and mouse devices are input-only, force feedback controllers allow physical sensations to be reflected to a user. Tasks that require users to position a cursor on a given target can be enhanced by applying physical forces to the user that aid in targeting. For example, an attractive force field implemented at the location of a graphical icon can greatly facilitate target acquisition and selection of the icon. It has been shown that force feedback can enhance a user's ability to perform basic functions within graphical user interfaces.
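
    An attractive force field of this kind can be sketched as a simple spring-like pull toward the icon's centre. The radius and stiffness constants below are illustrative assumptions, not values from the paper:

```python
import math

def attractive_force(cursor, target, radius=60.0, k=0.02):
    """Spring-like pull toward a target icon, active only within `radius` px.

    Returns the (fx, fy) force to reflect through the haptic device.
    All names and constants here are illustrative, not from the original system.
    """
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)      # outside the field: no assistance
    # Force grows linearly with remaining distance (Hooke's-law style),
    # gently pulling the cursor onto the icon once it enters the field.
    return (k * dx, k * dy)
```

    A full controller would sum such fields over all on-screen targets; the paper notes that too many competing fields (as in a dense GUI) can dilute the benefit.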

  13. Improving 3D Character Posing with a Gestural Interface.

    PubMed

    Kyto, Mikko; Dhinakaran, Krupakar; Martikainen, Aki; Hamalainen, Perttu

    2017-01-01

    The most time-consuming part of character animation is 3D character posing. Posing using a mouse is a slow and tedious task that involves sequences of selecting on-screen control handles and manipulating the handles to adjust character parameters, such as joint rotations and end effector positions. Thus, various 3D user interfaces have been proposed to make animating easier, but they typically provide less accuracy. The proposed interface combines a mouse with the Leap Motion device to provide 3D input. A usability study showed that users preferred the Leap Motion over a mouse as a 3D gestural input device. The Leap Motion drastically decreased the number of required operations and the task completion time, especially for novice users.

  14. Urine collection apparatus. [feminine hygiene]

    NASA Technical Reports Server (NTRS)

    Michaud, R. B. (Inventor)

    1981-01-01

    A urine collection device for females comprises an interface body with an interface surface for engagement with the user's body. The interface body comprises a forward portion defining a urine-receiving bore which has an inlet in the interface surface adapted to be disposed in surrounding relation to the urethral opening of the user. The interface body also has a rear portion integrally adjoining the forward portion and a non-invasive vaginal seal on the interface surface for sealing the vagina of the user from communication with the urine-receiving bore. An absorbent pad is removably supported on the interface body and extends laterally therefrom. A garment for supporting the urine collection is also disclosed.

  15. VISTILES: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration.

    PubMed

    Langner, Ricardo; Horak, Tom; Dachselt, Raimund

    2017-08-29

    We present VISTILES, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated & multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.

  16. Haptic force-feedback devices for the office computer: performance and musculoskeletal loading issues.

    PubMed

    Dennerlein, J T; Yang, M C

    2001-01-01

    Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that for a given task, use of a force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.

  17. Advanced display object selection methods for enhancing user-computer productivity

    NASA Technical Reports Server (NTRS)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen, and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with the various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit in applying this method is substantial, with the potential for increasing productivity across thousands of users and applications.

  18. Machine learning techniques for energy optimization in mobile embedded systems

    NASA Astrophysics Data System (ADS)

    Donohoo, Brad Kyoshi

    Mobile smartphones and other portable battery-operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity have improved over the years, but they still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
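
    The k-nearest-neighbor variant mentioned above can be sketched with only the standard library. The context features (hour of day, cell ID) and the on/off labels are hypothetical stand-ins for the thesis's spatiotemporal contexts:

```python
import math
from collections import Counter

# Hypothetical training data: (hour_of_day, cell_id) -> whether WiFi was useful.
# Feature choices and labels are illustrative, not taken from the thesis.
samples = [
    ((9,  12), "wifi_on"),   # weekday morning at the office
    ((10, 12), "wifi_on"),
    ((13, 40), "wifi_off"),  # midday commute, no known access point
    ((14, 41), "wifi_off"),
    ((20, 7),  "wifi_on"),   # evening at home
]

def knn_predict(features, k=3):
    """Vote among the k nearest (Euclidean) training samples."""
    ranked = sorted(samples, key=lambda s: math.dist(features, s[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

    At runtime the predicted label would gate the radio: predicting `wifi_off` for the current context lets the system power the interface down before it wastes energy scanning.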

  19. State of the art in nuclear telerobotics: focus on the man/machine connection

    NASA Astrophysics Data System (ADS)

    Greaves, Amna E.

    1995-12-01

    The interface between the human controller and remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality, and multi degree-of-freedom input devices lend us the ability to augment the man/machine interface, and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI that enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting edge concepts and enhancements that lead to the next generation of telerobotic interface systems.

  20. The Characteristics of User-Generated Passwords

    DTIC Science & Technology

    1990-03-01

    electronic keys), user interface tokens (pocket devices that can generate one-time passwords) and fixed password devices (plastic cards that contain...

  1. Integrating Conjoint Analysis with TOPSIS Algorithm to the Visual Effect of Icon Design Based on Multiple Users' Image Perceptions

    ERIC Educational Resources Information Center

    Tung, Ting-Chun; Chen, Hung-Yuan

    2017-01-01

    With the advance of mobile computing and wireless technology, a user's intent to interact with the interface of a mobile device is motivated not only by its intuitional operation, but also by the emotional perception induced by its aesthetic appeal. A graphical interface employing icons with suitable visual effect based on the users' emotional…

  2. Wearable wireless User Interface Cursor-Controller (UIC-C).

    PubMed

    Marjanovic, Nicholas; Kerr, Kevin; Aranda, Ricardo; Hickey, Richard; Esmailbeigi, Hananeh

    2017-07-01

    Controlling a computer or a smartphone's cursor allows the user to access a world full of information. For millions of people with limited upper-extremity motor function, controlling the cursor becomes profoundly difficult. Our team has developed the User Interface Cursor-Controller (UIC-C) to assist impaired individuals in regaining control over the cursor. The UIC-C is a hands-free device that utilizes the tongue muscle to control cursor movements. The entire device is housed inside a subject-specific retainer. The user maneuvers the cursor by manipulating a joystick embedded inside the retainer with the tongue. The joystick movement commands are sent to an electronic device via a Bluetooth connection, and the device is readily recognizable as a cursor controller by any Bluetooth-enabled electronic device. Testing has shown that the time it takes the user to control the cursor accurately via the UIC-C is about three times longer than with a standard hand-operated computer mouse. The device does not require any permanent modifications to the body; therefore, it could be used during the period of acute rehabilitation of the hands. With the development of modern smart homes and computer-controlled electronics, the UIC-C could be integrated into a system that enables individuals with permanent impairment to control the cursor. In conclusion, the UIC-C device is designed with the goal of allowing the user to accurately control a cursor during periods of either acute or permanent upper-extremity impairment.
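
    A tongue-joystick cursor of this kind is typically a rate controller: deflection maps to cursor velocity, with a deadzone absorbing resting jitter. This minimal sketch assumes that mapping; the gain and deadzone values are illustrative, not taken from the UIC-C firmware:

```python
def joystick_to_cursor(cursor, deflection, gain=3.0, deadzone=0.1):
    """Rate control: joystick deflection in [-1, 1] moves the cursor.

    A deadzone ignores resting-tongue jitter; `gain` scales deflection
    to pixels per update tick. All parameters are illustrative.
    """
    x, y = cursor
    dx, dy = deflection
    if abs(dx) < deadzone:
        dx = 0.0
    if abs(dy) < deadzone:
        dy = 0.0
    return (x + gain * dx, y + gain * dy)
```

    Calling this once per Bluetooth report would integrate deflection into cursor position, which is why accuracy costs time relative to the direct position control of a hand-held mouse.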

  3. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different configuration factors such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288
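
    A head-tracking pointer of this kind typically amplifies displacement from a calibrated neutral pose by a configurable gain and clamps the result to the screen. The sketch below assumes that mapping; the function and its parameters are illustrative, not from the paper:

```python
def head_to_pointer(head, neutral, gain, screen):
    """Map the tracked head position (camera px) to a pointer position.

    Displacement from a calibrated neutral pose is amplified by `gain`
    and clamped to the screen bounds. All parameters are illustrative.
    """
    w, h = screen
    # Camera x is mirrored so that leaning left moves the pointer left.
    px = w / 2 - gain * (head[0] - neutral[0])
    py = h / 2 + gain * (head[1] - neutral[1])
    return (min(max(px, 0), w), min(max(py, 0), h))
```

    The gain studied in the paper corresponds to the single multiplier here: higher gain reaches screen edges with less head motion but magnifies tracking noise, which is why it interacts with the optimal target size.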

  4. Perception of synchronization errors in haptic and visual communications

    NASA Astrophysics Data System (ADS)

    Kameyama, Seiji; Ishibashi, Yutaka

    2006-10-01

    This paper deals with a system which conveys the haptic sensation experienced by a user to a remote user. In the system, the user controls a haptic interface device with another remote haptic interface device while watching video. Haptic media and video of a real object which the user is touching are transmitted to the other user. By subjective assessment, we investigate the allowable and imperceptible ranges of synchronization error between haptic media and video. We employ four real objects and ask each subject whether the synchronization error is perceived for each object. Assessment results show that the synchronization error is more easily perceived when the haptic media are ahead of the video than when they are behind it.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foslien, Wendy K.; Curtner, Keith L.

    Because of growing energy demands and shortages, residential home owners are turning to energy conservation measures and smart home energy management devices to help them reduce energy costs and live more sustainably. In this context, the Honeywell team researched, developed, and tested the Context Aware Smart Home Energy Manager (CASHEM) as a trusted advisor for home energy management. The project focused on connecting multiple devices in a home through a uniform user interface. The design of the user interface was an important feature of the project because it provided a single place for the homeowner to control all devices and was also where they received coaching. CASHEM then used data collected from homes to identify the contexts that affect operation of home appliances. CASHEM's goal was to reduce energy consumption while keeping the user's key needs satisfied. Thus, CASHEM was intended to find the opportunities to minimize energy consumption in a way that fit the user's lifestyle.

  6. Rule-based interface generation on mobile devices for structured documentation.

    PubMed

    Kock, Ann-Kristin; Andersen, Björn; Handels, Heinz; Ingenerf, Josef

    2014-01-01

    In many software systems to date, interactive graphical user interfaces (GUIs) are represented implicitly in the source code, together with the application logic. Hence, the re-use, development, and modification of these interfaces is often very laborious. Flexible adjustments of GUIs for various platforms and devices as well as individual user preferences are furthermore difficult to realize. These problems motivate a software-based separation of content and GUI models on the one hand, and application logic on the other. In this project, a software solution for structured reporting on mobile devices is developed. Clinical content archetypes developed in a previous project serve as the content model while the Android SDK provides the GUI model. The necessary bindings between the models are specified using the Jess Rule Language.
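
    The binding step can be illustrated outside the Jess Rule Language. The following Python sketch expresses the same idea as first-match rules from content-model field types to Android widget names; both the field types and the widget labels are hypothetical, not the project's actual archetypes or rules:

```python
# Illustrative only: the project binds clinical content archetypes to
# Android SDK widgets via the Jess Rule Language. Here the same idea is
# sketched as an ordered rule list evaluated first-match-wins.
RULES = [
    (lambda f: f["type"] == "coded_text", "Spinner"),
    (lambda f: f["type"] == "quantity",   "EditText(number)"),
    (lambda f: f["type"] == "boolean",    "CheckBox"),
    (lambda f: True,                      "EditText(text)"),  # fallback rule
]

def bind_widget(field):
    """Return the widget named by the first rule that matches the field."""
    return next(widget for rule, widget in RULES if rule(field))
```

    Keeping the rules in data rather than in the GUI source is what makes the interface re-usable across platforms and adjustable per user preference, the separation the abstract argues for.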

  7. A study of usability principles and interface design for mobile e-books.

    PubMed

    Wang, Chao-Ming; Huang, Ching-Hua

    2015-01-01

    This study examined usability principles and interface designs in order to understand the relationship between the intentions of mobile e-book interface designs and users' perceptions. First, this study summarised 4 usability principles and 16 interface attributes in order to conduct usability testing and a questionnaire survey, referring to Nielsen (1993), Norman (2002), and Yeh (2010), who proposed the usability principles. Second, this study used interviews to explore the perceptions and behaviours of user operations with senior users of multi-touch prototype devices. The results of this study are as follows: (1) users' behaviour of operating an interactive interface is related to user prior experience; (2) users' rating of the visibility principle is related to users' subjective perception but not related to user prior experience; however, users' ratings of the ease, efficiency, and enjoyment principles are related to user prior experience; (3) the interview survey reveals that the key attributes affecting users' behaviour of operating an interface include aesthetics, achievement, and friendliness. This study conducts experiments to explore the effects of users' prior multi-touch experience on their behaviour of operating a mobile e-book interface and their rating of usability principles. Both qualitative and quantitative data analyses were performed. By applying protocol analysis, key attributes affecting users' behaviour of operation were determined.

  8. Keeping Disability in Mind: A Case Study in Implantable Brain-Computer Interface Research.

    PubMed

    Sullivan, Laura Specker; Klein, Eran; Brown, Tim; Sample, Matthew; Pham, Michelle; Tubig, Paul; Folland, Raney; Truitt, Anjali; Goering, Sara

    2018-04-01

    Brain-Computer Interface (BCI) research is an interdisciplinary area of study within Neural Engineering. Recent interest in end-user perspectives has led to an intersection with user-centered design (UCD). The goal of user-centered design is to reduce the translational gap between researchers and potential end users. However, while qualitative studies have been conducted with end users of BCI technology, little is known about individual BCI researchers' experience with and attitudes towards UCD. Given the scientific, financial, and ethical imperatives of UCD, we sought to gain a better understanding of practical and principled considerations for researchers who engage with end users. We conducted a qualitative interview case study with neural engineering researchers at a center dedicated to the creation of BCIs. Our analysis generated five themes common across interviews. The thematic analysis shows that participants identify multiple beneficiaries of their work, including other researchers, clinicians working with devices, device end users, and families and caregivers of device users. Participants value experience with device end users, and personal experience is the most meaningful type of interaction. They welcome (or even encourage) end-user input, but are skeptical of limited focus groups and case studies. They also recognize a tension between creating sophisticated devices and developing technology that will meet user needs. Finally, interviewees espouse functional, assistive goals for their technology, but describe uncertainty in what degree of function is "good enough" for individual end users. Based on these results, we offer preliminary recommendations for conducting future UCD studies in BCI and neural engineering.

  9. Pointing Device Performance in Steering Tasks.

    PubMed

    Senanayake, Ransalu; Goonetilleke, Ravindra S

    2016-06-01

    Use of touch-screen-based interactions is growing rapidly. Hence, knowing the maneuvering efficacy of touch screens relative to other pointing devices is of great importance in the context of graphical user interfaces. Movement time, accuracy, and user preferences of four pointing device settings were evaluated on a computer with 14 participants aged 20.1 ± 3.13 years. It was found that, depending on the difficulty of the task, the optimal settings differ for ballistic and visual control tasks. With a touch screen, resting the arm increased movement time for steering tasks. When both performance and comfort are considered, whether to use a mouse or a touch screen for person-computer interaction depends on the steering difficulty. Hence, an input device should be chosen based on the application, and should be optimized to match the graphical user interface. © The Author(s) 2016.

  10. New ergonomic headset for Tongue-Drive System with wireless smartphone interface.

    PubMed

    Park, Hangue; Kim, Jeonghee; Huo, Xueliang; Hwang, In-O; Ghovanloo, Maysam

    2011-01-01

    Tongue Drive System (TDS) is a wireless tongue-operated assistive technology (AT), developed for people with severe physical disabilities to control their environment using their tongue motion. We have developed a new ergonomic headset for the TDS with a user-friendly smartphone interface, through which users will be able to wirelessly control various devices, access computers, and drive wheelchairs. This headset design is expected to act as a flexible and multifunctional communication interface for the TDS and improve its usability, accessibility, aesthetics, and convenience for the end users.

  11. Secure content objects

    DOEpatents

    Evans, William D [Cupertino, CA]

    2009-02-24

    A secure content object protects electronic documents from unauthorized use. The secure content object includes an encrypted electronic document, a multi-key encryption table having at least one multi-key component, an encrypted header and a user interface device. The encrypted document is encrypted using a document encryption key associated with a multi-key encryption method. The encrypted header includes an encryption marker formed by a random number followed by a derivable variation of the same random number. The user interface device enables a user to input a user authorization. The user authorization is combined with each of the multi-key components in the multi-key encryption key table and used to try to decrypt the encrypted header. If the encryption marker is successfully decrypted, the electronic document may be decrypted. Multiple electronic documents or a document and annotations may be protected by the secure content object.
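
    The marker check can be sketched as follows. The stream cipher here is a toy stand-in (a real implementation would use a vetted cipher), and forming the candidate key by combining the user authorization with a multi-key component is an assumption about the patent's scheme:

```python
import hashlib
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    # Toy hash-counter stream cipher; a real system would use a vetted cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def make_header(key: bytes) -> bytes:
    """Marker = a random number followed by a derivable variation of it
    (here, its SHA-256 digest), encrypted under `key`."""
    r = secrets.token_bytes(16)
    marker = r + hashlib.sha256(r).digest()
    return bytes(a ^ b for a, b in zip(marker, _keystream(key, len(marker))))

def try_key(header: bytes, candidate: bytes) -> bool:
    """Decrypt the header and test the marker's internal consistency.

    A wrong candidate yields random bytes, so the derivable variation
    will not match and the document key is withheld.
    """
    plain = bytes(a ^ b for a, b in zip(header, _keystream(candidate, len(header))))
    r, variation = plain[:16], plain[16:]
    return hashlib.sha256(r).digest() == variation
```

    The marker's self-consistency is what lets the system try each multi-key component against the user authorization and recognize a successful decryption without storing the key in the clear.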

  12. A Web Service and Interface for Remote Electronic Device Characterization

    ERIC Educational Resources Information Center

    Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.

    2011-01-01

    A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…

  13. Universal Design and the Smart Home.

    PubMed

    Pennick, Tim; Hessey, Sue; Craigie, Roland

    2016-01-01

    The related concepts of Universal Design, Inclusive Design, and Design For All, all recognise that no one solution will fit the requirements of every possible user. This paper considers the extent to which current developments in smart home technology can help to reduce the numbers of users for whom mainstream technology is not sufficiently inclusive, proposing a flexible approach to user interface (UI) implementation focussed on the capabilities of the user. This implies development of the concepts underlying Universal Design to include the development of a flexible inclusive support infrastructure, servicing the requirements of individual users and their personalised user interface devices.

  14. User Interface Preferences in the Design of a Camera-Based Navigation and Wayfinding Aid

    ERIC Educational Resources Information Center

    Arditi, Aries; Tian, YingLi

    2013-01-01

    Introduction: Development of a sensing device that can provide a sufficient perceptual substrate for persons with visual impairments to orient themselves and travel confidently has been a persistent rehabilitation technology goal, with the user interface posing a significant challenge. In the study presented here, we enlist the advice and ideas of…

  15. 14 CFR 1215.102 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... necessary TDRSS operational areas, interface devices, and NASA communication circuits that unify the above... stream. The electronic signals acquired by TDRSS from the user craft or the user-generated input commands...

  16. Human Motion Tracking and Glove-Based User Interfaces for Virtual Environments in ANVIL

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D., II

    2002-01-01

    The Army/NASA Virtual Innovations Laboratory (ANVIL) at Marshall Space Flight Center (MSFC) provides an environment where engineers and other personnel can investigate novel applications of computer simulation and Virtual Reality (VR) technologies. Among the many hardware and software resources in ANVIL are several high-performance Silicon Graphics computer systems and a number of commercial software packages, such as Division MockUp by Parametric Technology Corporation (PTC) and Jack by Unigraphics Solutions, Inc. These hardware and software platforms are used in conjunction with various VR peripheral I/O (input / output) devices, CAD (computer aided design) models, etc. to support the objectives of the MSFC Engineering Systems Department/Systems Engineering Support Group (ED42) by studying engineering designs, chiefly from the standpoint of human factors and ergonomics. One of the more time-consuming tasks facing ANVIL personnel involves the testing and evaluation of peripheral I/O devices and the integration of new devices with existing hardware and software platforms. Another important challenge is the development of innovative user interfaces to allow efficient, intuitive interaction between simulation users and the virtual environments they are investigating. As part of his Summer Faculty Fellowship, the author was tasked with verifying the operation of some recently acquired peripheral interface devices and developing new, easy-to-use interfaces that could be used with existing VR hardware and software to better support ANVIL projects.

  17. Wireless device connection problems and design solutions

    NASA Astrophysics Data System (ADS)

    Song, Ji-Won; Norman, Donald; Nam, Tek-Jin; Qin, Shengfeng

    2016-09-01

    Users, especially non-expert users, commonly experience problems when connecting multiple devices with interoperability. While studies on multiple device connections are mostly concentrated on spontaneous device association techniques with a focus on security aspects, research on user interaction for device connection is still limited. More research into understanding people is needed for designers to devise usable techniques. This research applies the Research-through-Design method and studies non-expert users' interactions in establishing wireless connections between devices. The "Learning from Examples" concept is adopted to develop a study focus line by learning from expert users' interaction with devices. This focus line is then used to guide researchers in exploring the non-expert users' difficulties at each stage of the focus line. Finally, the Research-through-Design approach is used to understand the users' difficulties, gain insights into design problems, and suggest usable solutions. When connecting a device, the user is required to manage not only the device's functionality but also the interaction between devices. Based on learning from failures, an important insight is that the existing design approach to improving single-device interaction issues, such as improvements to graphical user interfaces or computer guidance, cannot help users handle problems between multiple devices. This study finally proposes a desirable user-device interaction in which images of the two devices function together with a system image to provide the user with feedback on the status of the connection, allowing them to infer any required actions.

  18. Interface Anywhere: Development of a Voice and Gesture System for Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    Thompson, Shelby; Haddock, Maxwell; Overland, David

    2013-01-01

    The Interface Anywhere Project was funded through the Innovation Charge Account (ICA) at NASA JSC in the fall of 2012. The project was a collaboration between human factors and engineering to explore the possibility of designing an interface to control basic habitat operations through gesture and voice: (a) current interfaces require users to be physically near an input device in order to interact with the system, and (b) by using voice and gesture commands, the user is able to interact with the system anywhere within the work environment.

  19. User centered integration of Internet of Things devices

    NASA Astrophysics Data System (ADS)

    Manione, Roberto

    2017-06-01

    This paper discusses an IoT framework which allows rapid and easy setup and customization of end-to-end solutions for field data collection and presentation; it is effective in the development of both informative and transactional applications for a wide range of application fields, such as home, industry and environment. On the "far end" of the chain are the IoT devices gathering the signals; they are developed using a fully model-based approach, in which programming is not required: the TaskScript technology is used for this purpose, supporting a choice of physical boards and boxes equipped with a range of input and output interfaces and with a TCP/IP interface. The development of the needed specific IoT devices takes advantage of the available "standard" hardware; the software development of the algorithms for sampling, conditioning and uploading signals to the Cloud is supported by a graphical-only IDE. On the "near end" of the chain is the presentation interface, through which users can browse the information provided by their IoT devices; it is implemented in a conversational way, using the Bot paradigm: Bots are conversational automatons with whom users can "chat". They are accessed via mainstream messenger programs, such as Telegram(C), Skype(C) or others, available on smartphones, tablets or desktops; unlike apps, bots do not need installation on the user's device. A message Broker has been implemented to mediate between the far end and the near end of the chain, providing the needed services; its behavior is driven by a set of rules provided on a per-device basis at configuration level. The Broker is able to store messages received from the devices, and to process and forward them to the specified recipient(s) according to the provided rules; finally, it is able to send transactional commands from users back to the requested device, implementing not only field observation but also field control.
    IoT solutions implemented with the proposed framework are user friendly: users can literally "chat with their devices", asking for information, issuing commands, and receiving alert notifications, all with their favorite (mobile) terminal. To demonstrate the effectiveness of the proposed scenario, several solutions have been set up for industrial applications; such "mobile dashboards" are presently used by managers and technicians to keep track of their machines and plants.
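
    The rule-driven broker described above can be sketched in a few lines. This is a minimal illustration, not the actual TaskScript/Bot implementation; the rule predicates, device names, and thresholds are invented for the example.

```python
# Minimal sketch of a per-device rule broker: store every message from a
# device, and forward it to each recipient whose rule predicate matches.
# All names and thresholds below are illustrative assumptions.

class Broker:
    def __init__(self):
        self.rules = {}      # device_id -> list of (predicate, recipient)
        self.stored = []     # every message the broker has received

    def add_rule(self, device_id, predicate, recipient):
        self.rules.setdefault(device_id, []).append((predicate, recipient))

    def receive(self, device_id, message):
        """Store the message and forward it to each matching recipient."""
        self.stored.append((device_id, message))
        return [(recipient, message)
                for predicate, recipient in self.rules.get(device_id, [])
                if predicate(message)]

broker = Broker()
# Alert the maintenance chat only when the temperature exceeds a threshold.
broker.add_rule("boiler-1", lambda m: m["temp"] > 90, "maintenance-bot")
print(broker.receive("boiler-1", {"temp": 95}))   # forwarded
print(broker.receive("boiler-1", {"temp": 60}))   # stored but not forwarded
```

    A transactional command in the other direction would be the same dispatch run in reverse: a user message matched against per-device rules and forwarded to the device.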

  20. Intelligent Context-Aware and Adaptive Interface for Mobile LBS

    PubMed Central

    Liu, Yanhong

    2015-01-01

    A context-aware user interface plays an important role in many human-computer interaction tasks of location-based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which matters in the mobile environment where users of location-based services are impeded by device limitations. Better context-aware human-computer interaction models for mobile location-based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a task, but to understand the human processes that interact in spatial query, which will in turn inform the detailed design of better user interfaces. In this study, a context-aware adaptive model for the mobile location-based services interface is proposed, comprising three major sections: purpose, adjustment, and adaptation. Based on this model, we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model handles users' demands in a complicated environment and demonstrate its feasibility through experimental results. PMID:26457077

  1. Assessment of Application Technology of Natural User Interfaces in the Creation of a Virtual Chemical Laboratory

    ERIC Educational Resources Information Center

    Jagodzinski, Piotr; Wolski, Robert

    2015-01-01

    Natural User Interfaces (NUI) are now widely used in electronic devices such as smartphones, tablets and gaming consoles. We have tried to apply this technology in the teaching of chemistry in middle school and high school. A virtual chemical laboratory was developed in which students can simulate the performance of laboratory activities similar…

  2. Experimental setup for evaluating an adaptive user interface for teleoperation control

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interaction with a machine is the control interface, which single-handedly can define the user's satisfaction and the efficiency of performing a task. This paper elaborates the implementation of an experimental setup to study an adaptive algorithm that can help the user better teleoperate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply whenever the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and associated results that were used to validate the adaptive interface for a differential-drive robot with two different input devices: a joystick and a Myo gesture-control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
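
    The genetic-algorithm search for mapping parameters can be sketched as follows. The linear wheel-speed mapping, the "intuitive" target gains, and the fitness function are illustrative assumptions, not the authors' formulation.

```python
import random

# Toy genetic algorithm in the spirit of the record above: evolve the gains of
# a linear input-to-actuator mapping for a differential-drive robot. TARGET is
# an assumed "intuitive" mapping the search should recover.

def drive(params, fwd, turn):
    kf, kt = params
    return (kf * fwd + kt * turn, kf * fwd - kt * turn)  # (left, right) speed

TARGET = (1.0, 0.5)

def fitness(params):
    err = 0.0
    for fwd, turn in [(1, 0), (0, 1), (1, 1), (-1, 0.5)]:
        want, got = drive(TARGET, fwd, turn), drive(params, fwd, turn)
        err += (want[0] - got[0]) ** 2 + (want[1] - got[1]) ** 2
    return -err  # higher is better

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                    # elitist selection
        children = [(p[0] + rng.gauss(0, 0.1),            # Gaussian mutation
                     p[1] + rng.gauss(0, 0.1)) for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near the target gains
```

    A real adaptive interface would replace the fixed target with a fitness derived from the user's task performance, so the mapping adapts online.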

  3. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.
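
    The core idea, binding user-defined command words to device-driving routines so an operator with no programming background can type plain commands, can be sketched as a dispatch table. The command names and routines below are hypothetical.

```python
# Sketch of a user-defined command table: each plain-language command is bound
# to a device-driving routine. Names are illustrative, not from the program.

def open_valve():
    return "valve opened"

def log_temperature():
    return "temperature logged"

COMMANDS = {
    "OPEN VALVE": open_valve,
    "LOG TEMP": log_temperature,
}

def execute(line):
    """Look up the typed command (case-insensitive) and run its routine."""
    action = COMMANDS.get(line.strip().upper())
    return action() if action else f"unknown command: {line!r}"

print(execute("open valve"))   # -> valve opened
print(execute("stop"))         # -> unknown command: 'stop'
```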

  4. Design and Evaluation of Shape-Changing Haptic Interfaces for Pedestrian Navigation Assistance.

    PubMed

    Spiers, Adam J; Dollar, Aaron M

    2017-01-01

    Shape-changing interfaces are a category of device capable of altering their form in order to facilitate communication of information. In this work, we present a shape-changing device that has been designed for navigation assistance. 'The Animotus' (previously 'The Haptic Sandwich') resembles a cube with an articulated upper half that is able to rotate and extend (translate) relative to the bottom half, which is fixed in the user's grasp. This rotation and extension, generally felt via the user's fingers, is used to represent heading and proximity to navigational targets. The device is intended to provide an alternative to screen or audio based interfaces for visually impaired, hearing impaired, deafblind, and sighted pedestrians. The motivation and design of the haptic device is presented, followed by the results of a navigation experiment that aimed to determine the role of each device DOF, in terms of facilitating guidance. An additional device, 'The Haptic Taco', which modulated its volume in response to target proximity (negating directional feedback), was also compared. Results indicate that while the heading (rotational) DOF benefited motion efficiency, the proximity (translational) DOF benefited velocity. Combination of the two DOF improved overall performance. The volumetric Taco performed comparably to the Animotus' extension DOF.
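
    A mapping from navigation state to the two degrees of freedom (rotation for heading error, extension for proximity) might look like the sketch below. The rotation range, scaling, and maximum sensing range are assumptions for illustration, not the Animotus' actual parameters.

```python
# Illustrative heading/proximity mapping for a two-DOF shape-changing handle:
# the top half rotates toward the target and extends with distance.
# Ranges and gains below are invented assumptions.

def actuate(bearing_to_target_deg, heading_deg, distance_m, max_range_m=50.0):
    # signed heading error, wrapped into [-180, 180)
    err = (bearing_to_target_deg - heading_deg + 180.0) % 360.0 - 180.0
    rotation = max(-30.0, min(30.0, err / 6.0))             # degrees of twist
    extension = min(distance_m, max_range_m) / max_range_m  # 0 (here) .. 1 (far)
    return rotation, extension

print(actuate(90.0, 80.0, 25.0))   # slight rotation, half extension
```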

  5. [Universal electrogustometer EG-2].

    PubMed

    Wałkanis, Andrzej; Czesak, Michał; Pleskacz, Witold A

    2011-01-01

    Electrogustometry is a method for taste diagnosis and measurement. The EG-2 project is being developed in cooperation between the Warsaw University of Technology and the Military Institute of Medicine in Warsaw. The device is an evolution of the recent universal electrogustometer EG-1 prototype. Based on considerations and experience acquired during prototype usage, many enhancements have been incorporated into the device. The aim was to create an easy-to-use, portable, battery-powered device enabling fast measurements. The developed electrogustometer uses an innovative low-power microprocessor system, which controls the whole device. The user interface is based on a 5.7" graphical LCD (Liquid Crystal Display) with a touchscreen, which can be operated directly with a finger or with an optional stylus. A dedicated GUI (Graphical User Interface) offers simple predefined measurements and advanced settings of signal parameters. It is also possible to store measurement results and patient data in internal memory. The user interface is multilingual. Signals for patient examinations, supplied through a bipolar electrode, are generated by an on-board circuit using DDS (Direct Digital Synthesis) and a DAC (Digital-to-Analog Converter). The electrogustometer is able to generate DC, sine, triangle or rectangle signals with current amplitude from 0 to 500 µA and frequency from 0 to 500 Hz. The device is designed for manual and automatic measurement modes. The USB (Universal Serial Bus) port allows retrieval of data stored in internal memory and charging of the built-in Li-Ion battery that powers the device.

  6. Device- and system-independent personal touchless user interface for operating rooms : One personal UI to control all displays in an operating room.

    PubMed

    Ma, Meng; Fallavollita, Pascal; Habert, Séverine; Weidert, Simon; Navab, Nassir

    2016-06-01

    In the modern day operating room, the surgeon performs surgeries with the support of different medical systems that showcase patient information, physiological data, and medical images. It is generally accepted that numerous interactions must be performed by the surgical team to control the corresponding medical system to retrieve the desired information. Joysticks and physical keys are still present in the operating room due to the disadvantages of mice, and surgeons often communicate instructions to the surgical team when requiring information from a specific medical system. In this paper, a novel user interface is developed that allows the surgeon to personally perform touchless interaction with the various medical systems and to switch effortlessly among them, all without modifying the systems' software and hardware. To achieve this, a wearable RGB-D sensor is mounted on the surgeon's head for inside-out tracking of his/her finger with any of the medical systems' displays. Android devices with a special application are connected to the computers on which the medical systems are running, simulating a normal USB mouse and keyboard. When the surgeon performs interaction using pointing gestures, the desired cursor position in the targeted medical system display, and gestures, are transformed into general events and then sent to the corresponding Android device. Finally, the application running on the Android devices generates the corresponding mouse or keyboard events according to the targeted medical system. To simulate an operating room setting, our unique user interface was tested by seven medical participants who performed several interactions with the visualization of CT, MRI, and fluoroscopy images at varying distances from them. Results from the system usability scale and NASA-TLX workload index indicated a strong acceptance of our proposed user interface.

  7. Multimodal Excitatory Interfaces with Automatic Content Classification

    NASA Astrophysics Data System (ADS)

    Williamson, John; Murray-Smith, Roderick

    We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.
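
    The classification-to-sonification pipeline, extracting salient features from a message and using them to set the physical parameters of the simulated rattling objects, can be illustrated crudely. The keyword list and parameter mappings below are invented assumptions, not the authors' language models.

```python
# Illustrative sketch: score a text message's salience and map the score to
# physical parameters of the simulated objects. A real system would use a
# language model; the keyword heuristic here is a stand-in assumption.

URGENT = {"urgent", "now", "asap", "call"}

def salience(text):
    words = text.lower().split()
    return sum(w.strip("!?.,") in URGENT for w in words) / max(len(words), 1)

def object_params(text):
    s = salience(text)
    return {
        "mass": 1.0 + 4.0 * s,     # urgent messages feel heavier when shaken
        "timbre_brightness": s,    # and sound brighter
    }

print(object_params("call me now!"))
print(object_params("see you tomorrow"))
```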

  8. Effects of Aging and Domain Knowledge on Usability in Small Screen Devices for Diabetes Patients

    NASA Astrophysics Data System (ADS)

    Calero Valdez, André; Ziefle, Martina; Horstmann, Andreas; Herding, Daniel; Schroeder, Ulrik

    Technology acceptance has become a key concept for the successful rollout of technical devices. Though the concept has been studied intensively for nearly 20 years now, many open questions remain. This especially applies to technology acceptance among older users, who are known to be very sensitive to suboptimal interfaces and show considerable reservations towards the usage of new technology. Mobile small-screen technology increasingly penetrates health care and medical applications. This study investigates the impacts of aging, technology expertise, and domain knowledge on user interaction, using the example of diabetes. For this purpose, user effectiveness and efficiency were measured on a simulated small-screen device and related to user characteristics, showing that age and technology expertise have a big impact on the usability of the device. Furthermore, the impacts of user characteristics and success during the trial on acceptance of the device were surveyed and analyzed.

  9. Development and functional demonstration of a wireless intraoral inductive tongue computer interface for severely disabled persons.

    PubMed

    N S Andreasen Struijk, Lotte; Lontis, Eugen R; Gaihede, Michael; Caltenco, Hector A; Lund, Morten Enemark; Schioeler, Henrik; Bentsen, Bo

    2017-08-01

    Individuals with tetraplegia depend on alternative interfaces in order to control computers and other electronic equipment. Current interfaces are often limited in the number of available control commands, and may compromise the social identity of an individual due to their undesirable appearance. The purpose of this study was to implement an alternative computer interface, which was fully embedded into the oral cavity and which provided multiple control commands. The development of a wireless, intraoral, inductive tongue computer interface is described. The interface encompassed a 10-key keypad area and a mouse pad area, and was embedded wirelessly into the oral cavity of the user. Its functionality was demonstrated in two tetraplegic individuals and two able-bodied individuals. Results: The system was invisible during use and allowed the user to type on a computer using either the keypad area or the mouse pad area. The fastest typing time was 1.8 s for repetitively typing a correct character with the keypad area and 1.4 s with the mouse pad area. The results suggest that this inductive tongue computer interface provides an esthetically acceptable and functionally efficient environmental control for a severely disabled user. Implications for Rehabilitation: new design, implementation and detection methods for intraoral assistive devices; demonstration of wireless powering and encapsulation techniques suitable for intraoral embedment of assistive devices; and demonstration of the functionality of a rechargeable, fully embedded, intraoral tongue-controlled computer input device.

  10. Efficient Verification of Holograms Using Mobile Augmented Reality.

    PubMed

    Hartl, Andreas Daniel; Arth, Clemens; Grubert, Jens; Schmalstieg, Dieter

    2016-07-01

    Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, optically variable devices such as holograms are important, but difficult to inspect. Augmented Reality can provide all relevant information on standard mobile devices. However, hologram verification on mobile devices still takes a long time and provides lower accuracy than inspection by humans using appropriate reference information. We aim to address these drawbacks with automatic matching combined with a special parametrization of an efficient goal-oriented user interface which supports constrained navigation. We first evaluate a series of similarity measures for matching hologram patches to provide a sound basis for automatic decisions. Then a re-parametrized user interface is proposed based on observations of typical user behavior during document capture. These measures help to reduce capture time to approximately 15 s, with better decisions regarding the evaluated samples than can be achieved by untrained users.
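
    One common patch-similarity measure of the kind such a system might evaluate is normalized cross-correlation. The sketch below is a generic pure-Python illustration on 1-D intensity lists (assuming non-constant patches), not the paper's implementation.

```python
# Normalized cross-correlation between two grayscale patches, flattened to
# intensity lists: +1 for identical structure, -1 for inverted structure.
# Assumes neither patch is constant (nonzero variance).

def ncc(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

patch = [10, 20, 30, 40]
print(round(ncc(patch, [12, 22, 32, 42]), 6))  # same up to brightness offset -> 1.0
print(round(ncc(patch, [40, 30, 20, 10]), 6))  # reversed -> -1.0
```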

  11. Remote Adaptive Communication System

    DTIC Science & Technology

    2001-10-25

    ... manage several different devices using the software tool.
    A. Client/Server Architecture. The architecture we are proposing is based on the Client/Server model (see figure 3). We want both client and server to be accessible from anywhere via the internet. The computer, acting as a server, is in... On the other hand, each of the client applications will act as sender or receiver, depending on the associated interface: user interface or device...

  12. The empty OR-process analysis and a new concept for flexible and modular use in minimal invasive surgery.

    PubMed

    Eckmann, Christian; Olbrich, Guenter; Shekarriz, Hodjat; Bruch, Hans-Peter

    2003-01-01

    The reproducible advantages of minimal invasive surgery have led to a worldwide spread of these techniques. Nevertheless, the increasing use of technology causes problems in the operating room (OR). The workstation environment and workflow are handicapped by a great number of isolated solutions that demand a large amount of space. The Center of Excellence in Medical Technology (CEMET) was established in 2001 as an institution for close cooperation between users, science, and manufacturers of medical devices in the State of Schleswig-Holstein, Germany. The future OR, as a major project, began with a detailed process analysis, which identified the large number of medical devices with different interfaces and poor standardisation as the main problems. Smaller and more flexible devices are necessary, as well as functional modules located outside the OR; only actuators should be positioned near the operation area. The future OR should include a flexible-room concept and less equipment than is currently in use. A uniform human-user interface is needed to control the OR environment. This article addresses the need for a clear workspace environment, intelligent user interfaces, and a flexible-room concept to improve the potential of minimal invasive surgery.

  13. Application of SQL database to the control system of MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Tomohiro; Omata, Koji; Konishi, Masahiro; Ichikawa, Takashi; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka Katsuno; Nishimura, Tetsuo

    2006-06-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru telescope. In order to perform near-infrared imaging and spectroscopy with cold slit masks, MOIRCS contains many device components, which are distributed on an Ethernet LAN. Two PCs wired to the focal-plane-array electronics operate the two HAWAII2 detectors, and two further PCs are used for integrated control and quick data reduction, respectively. Though most of the devices (e.g., filter and grism turrets, slit exchange mechanism for spectroscopy) are controlled via an RS232C interface, they are accessible over TCP/IP using TCP/IP-to-RS232C converters. Moreover, other devices are also connected to the Ethernet LAN. This network-distributed structure provides flexibility of hardware configuration. We have constructed an integrated control system for such network-distributed hardware, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS also has a network-distributed software design, applying TCP/IP socket communication to interprocess communication. In order to mediate between the device interfaces and the user interfaces, we defined three layers in T-LECS: an external layer for user interface applications, an internal layer for device interface applications, and a communication layer, which connects the two layers above. In the communication layer, we store the data of the system in an SQL database server: status data, FITS header data, and also metadata such as device configuration data and FITS configuration data. We present our software system design and the database schema used to manage observations of MOIRCS with Subaru.
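
    The communication-layer idea, an SQL database holding device status and FITS header values, can be sketched with SQLite. The table and column names below are illustrative assumptions, not the actual MOIRCS schema.

```python
import sqlite3

# Rough sketch of a control-system database: one table for device status
# key/value pairs, one for FITS header entries per exposure. Schema is an
# invented illustration, not T-LECS's.

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE device_status (
    device   TEXT NOT NULL,   -- e.g. 'filter_turret'
    key      TEXT NOT NULL,   -- e.g. 'position'
    value    TEXT,
    updated  TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE fits_header (
    exposure INTEGER NOT NULL,
    keyword  TEXT NOT NULL,   -- e.g. 'FILTER01'
    value    TEXT,
    comment  TEXT
);
""")
db.execute("INSERT INTO device_status (device, key, value) VALUES (?, ?, ?)",
           ("filter_turret", "position", "Ks"))
db.execute("INSERT INTO fits_header (exposure, keyword, value) VALUES (?, ?, ?)",
           (1, "FILTER01", "Ks"))

row = db.execute("SELECT value FROM device_status "
                 "WHERE device = 'filter_turret' AND key = 'position'").fetchone()
print(row[0])  # -> Ks
```

    Keeping status and header data in one database lets the external layer build FITS headers by querying the same store the device layer writes to.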

  14. The Body-Machine Interface: A new perspective on an old theme

    PubMed Central

    Casadio, Maura; Ranganathan, Rajiv; Mussa-Ivaldi, Ferdinando A.

    2012-01-01

    Body-machine interfaces establish a way to interact with a variety of devices, allowing their users to extend the limits of their performance. Recent advances in this field, ranging from computer interfaces to bionic limbs, have had important consequences for people with movement disorders. In this article, we provide an overview of the basic concepts underlying the body-machine interface, with special emphasis on their use for rehabilitation and for operating assistive devices. We outline the steps involved in building such an interface and we highlight the critical role of body-machine interfaces in addressing theoretical issues in motor control as well as their utility in movement rehabilitation. PMID:23237465

  15. Exoskeletal meal assistance system (EMAS II) for progressive muscle dystrophy patient.

    PubMed

    Hasegawa, Yasuhisa; Oura, Saori

    2011-01-01

    This paper introduces a 4-DOF exoskeletal meal assistance system (EMAS II) for progressive muscle dystrophy patients. It is generally better for patients to use their own hands in daily life, because active movement helps maintain their residual function, health, and initiative. The EMAS II, which has a new joystick-type user interface device and three DOFs at the shoulder, is enhanced for easier operation and more comfortable support during eating, as the successor of the previous system, which had two DOFs at the shoulder. In order to control the 4-DOF system with the simple user interface device, the EMAS II simulates the upper-limb motion patterns of a healthy person. The motion patterns are modeled by extracting correlations between the height of the user's wrist joint and that of the user's elbow joint at the table. Moreover, the EMAS II automatically brings the user's hand up to his/her mouth or back to the table when he/she pushes a preset switch on the interface device. Therefore a user only has to control the position of his/her wrist to pick or scoop food and then flip the switch to start the automatic mode, while the height of the elbow joint is controlled automatically by the EMAS II itself. The results of experiments, in which a healthy subject regarded as a muscle dystrophy patient ate a meal with EMAS II, show that the subject finished her meal in a natural way in 18 minutes 40 seconds, within the recommended time of 30 minutes. © 2011 IEEE

  16. Data storage technology: Hardware and software, Appendix B

    NASA Technical Reports Server (NTRS)

    Sable, J. D.

    1972-01-01

    This project involves the development of more economical ways of integrating and interfacing new storage devices and data processing programs into a computer system. It involves developing interface standards and a software/hardware architecture which will make it possible to develop machine-independent devices and programs. These will interface with the machine-dependent operating systems of particular computers. The development project will not develop the software, which would ordinarily be the responsibility of the manufacturer to supply, but the standards with which that software is expected to conform in providing an interface with the user or storage system.

  17. Design and evaluation of nonverbal sound-based input for those with motor handicapped.

    PubMed

    Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong

    2013-03-01

    Most personal computing interfaces rely on the users' ability to use hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate the Windows graphical user interface by producing hums and fricatives into regular microphones. Hierarchically arranged menus are utilized so that only a minimal number of different actions is required at a time. The proposed scheme was found to be accurate and to respond promptly compared to other sound-based schemes. Being able to select from multiple item-selecting modes reduced the average time needed to complete the test-scenario tasks to almost half the time needed when the tasks were performed solely through cursor movements. Still, helping users select the most appropriate mode for a desired task should improve the overall usability of the proposed scheme.
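
    The hierarchical-menu idea is that with only two reliably separable sounds, say a hum to advance and a fricative to select, any number of commands stays reachable. The menu tree and sound labels below are a hypothetical example, not the proposed scheme's actual vocabulary.

```python
# Sketch of two-sound menu navigation: 'hum' cycles to the next sibling,
# 'fricative' descends into a submenu or selects a leaf action.
# The menu contents are invented for illustration.

MENU = {
    "File": {"Open": "open", "Save": "save"},
    "Edit": {"Copy": "copy", "Paste": "paste"},
}

def navigate(tree, sounds):
    items = list(tree.items())
    idx = 0
    for s in sounds:
        if s == "hum":
            idx = (idx + 1) % len(items)       # advance to next sibling
        elif s == "fricative":
            name, node = items[idx]
            if isinstance(node, dict):         # descend into submenu
                items, idx = list(node.items()), 0
            else:                              # leaf reached: select it
                return node
    return None

# hum -> Edit, fricative -> descend, hum -> Paste, fricative -> select
print(navigate(MENU, ["hum", "fricative", "hum", "fricative"]))  # -> paste
```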

  18. Method and system for providing work machine multi-functional user interface

    DOEpatents

    Hoff, Brian D [Peoria, IL; Akasam, Sivaprasad [Peoria, IL; Baker, Thomas M [Peoria, IL

    2007-07-10

    A method is performed to provide a multi-functional user interface on a work machine for displaying suggested corrective action. The process includes receiving status information associated with the work machine and analyzing the status information to determine an abnormal condition. The process also includes displaying a warning message on the display device indicating the abnormal condition and determining one or more corrective actions to handle the abnormal condition. Further, the process includes determining an appropriate corrective action among the one or more corrective actions and displaying a recommendation message on the display device reflecting the appropriate corrective action. The process may also include displaying a list including the remaining one or more corrective actions on the display device to provide alternative actions to an operator.

  19. Design and usability evaluation of user-centered and visual-based aids for dietary food measurement on mobile devices in a randomized controlled trial.

    PubMed

    Liu, Ying-Chieh; Chen, Chien-Hung; Lee, Chien-Wei; Lin, Yu-Sheng; Chen, Hsin-Yun; Yeh, Jou-Yin; Chiu, Sherry Yueh-Hsia

    2016-12-01

    We designed and developed two interactive app interfaces for dietary food measurement on mobile devices. The user-centered designs of both the IPI (interactive photo interface) and the SBI (sketching-based interface) were evaluated. Four types of outcomes were assessed to evaluate the usability of mobile devices for dietary measurement, including accuracy, absolute weight differences, and the response time, to determine the efficacy of food measurements. The IPI presented users with images of pre-determined portion sizes of a specific food and allowed users to scan and then select the image best matching the food that they were measuring. The SBI required users to relate the food shape to a readily available comparator (e.g., a credit card) and scribble to shade in the appropriate area. A randomized controlled trial was conducted to evaluate their usability. A total of 108 participants were randomly assigned to three groups: the IPI (n=36) and SBI (n=38) experimental groups, and the traditional life-size photo (TLP) group as the control. A total of 18 types of food items with 3-4 different weights were randomly selected for assessment of each type. The Chi-square test and independent t-test were performed for the dichotomous and continuous variable analyses, respectively. The total accuracy rates were 66.98%, 44.15%, and 72.06% for the IPI, SBI, and TLP, respectively. No significant difference was observed between the IPI and TLP, regardless of the accuracy proportion or weight differences. The SBI accuracy rates were significantly lower than those of the IPI and TLP, especially for several spooned, square-cube, and sliced-pie food items. The time needed to complete the assessment was significantly lower for the IPI than for the SBI.
    Our study corroborates that the user-centered, visual-based design of the IPI on a mobile device is comparable to the TLP in terms of usability for dietary food measurement. However, improvements are needed because both the IPI and TLP accuracies for some food shapes were lower than 60%. The SBI is not yet a viable aid; this innovative alternative requires further improvements to the user interface. Copyright © 2016 Elsevier Inc. All rights reserved.
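
    The dichotomous comparison the study describes, accuracy proportions between two interfaces, is a 2x2 chi-square test. The sketch below uses the standard closed-form statistic on invented counts, not the study's data.

```python
# Chi-square statistic for a 2x2 table [[a, b], [c, d]] of correct/incorrect
# counts for two interfaces. Counts below are invented for illustration.

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# e.g. interface A: 67 correct / 33 incorrect; interface B: 44 / 56
stat = chi_square_2x2(67, 33, 44, 56)
print(round(stat, 2), stat > 3.84)  # 3.84 = critical value at p = 0.05, df = 1
```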

  20. Multimodal browsing using VoiceXML

    NASA Astrophysics Data System (ADS)

    Caccia, Giuseppe; Lancini, Rosa C.; Peschiera, Giuseppe

    2003-06-01

    With the increasing development of devices such as personal computers, WAP terminals and personal digital assistants connected to the World Wide Web, end users feel the need to browse the Internet through multiple modalities. We investigate how to create a user interface and a service distribution platform granting the user access to the Internet through standard I/O modalities and voice simultaneously. Different architectures are evaluated, suggesting the most suitable for each client terminal (PC or WAP). In particular, the design of the multimodal user-machine interface considers the synchronization issue between graphical and voice contents.

  1. Portable multiplicity counter

    DOEpatents

    Newell, Matthew R [Los Alamos, NM; Jones, David Carl [Los Alamos, NM

    2009-09-01

    A portable multiplicity counter has signal input circuitry, processing circuitry and a user/computer interface disposed in a housing. The processing circuitry, which can comprise a microcontroller integrated circuit operably coupled to shift register circuitry implemented in a field programmable gate array, is configured to be operable via the user/computer interface to count input signal pulses receivable at said signal input circuitry and record time correlations thereof in a total counting mode, a coincidence counting mode and/or a multiplicity counting mode. The user/computer interface can be, for example, an LCD display/keypad and/or a USB interface. The counter can include a battery pack for powering the counter and low/high voltage power supplies for biasing external detectors, so that the counter can be configured as a hand-held device for counting neutron events.
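
    The three counting modes can be illustrated on a list of pulse timestamps: totals, coincidences within a gate window, and the multiplicity distribution of pulses per gate. The triggered-gate scheme and gate width below are simplifying assumptions, not the patent's shift-register implementation.

```python
# Toy version of total / coincidence / multiplicity counting: open a gate at
# each triggering pulse and count pulses falling inside it. Gate width and
# timestamps are invented for illustration.

def count_modes(timestamps, gate=1.0):
    totals = len(timestamps)
    multiplicity = {}          # pulses-per-gate -> number of gates
    coincidences = 0
    i = 0
    while i < len(timestamps):
        j = i
        while j < len(timestamps) and timestamps[j] - timestamps[i] <= gate:
            j += 1
        m = j - i              # pulses inside this gate
        multiplicity[m] = multiplicity.get(m, 0) + 1
        if m >= 2:
            coincidences += 1
        i = j
    return totals, coincidences, multiplicity

print(count_modes([0.0, 0.4, 0.9, 5.0, 9.0, 9.2, 9.3]))
```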

  2. Adaptive Motor Resistance Video Game Exercise Apparatus and Method of Use Thereof

    NASA Technical Reports Server (NTRS)

    Reich, Alton (Inventor); Shaw, James (Inventor)

    2015-01-01

The invention comprises a method and/or an apparatus using computer-configured exercise equipment and electric-motor-provided physical resistance in conjunction with a game system, such as a video game system, where the exercise system provides real physical resistance to a user interface. Results of user interaction with the user interface are integrated into a video game, such as one running on a game console. The resistance system comprises: a subject interface, software control, a controller, an electric servo assist/resist motor, an actuator, and/or a subject sensor. The system provides actual physical interaction with a resistance device as input to the game console and the game run thereon.

  3. A Voice and Mouse Input Interface for 3D Virtual Environments

    NASA Technical Reports Server (NTRS)

    Kao, David L.; Bryson, Steve T.

    2003-01-01

There have been many success stories about how 3D input devices can be fully integrated into an immersive virtual environment. Electromagnetic trackers, optical trackers, gloves, and flying mice are just some of these input devices. Though we could use existing 3D input devices that are commonly used for VR applications, there are several factors that prevent us from choosing these input devices for our applications. One main factor is that most of these tracking devices are not suitable for prolonged use due to the human fatigue associated with using them. A second factor is that many of them would occupy additional office space. Another factor is that many of the 3D input devices are expensive due to the unusual hardware that is required. For our VR applications, we want a user interface that works naturally with standard equipment. In this paper, we demonstrate applications of our proposed multimodal interface using a 3D dome display. We also show that effective data analysis can be achieved while the scientists view their data rendered inside the dome display and perform user interactions simply using the mouse and voice input. Though the spherical coordinate grid seems ideal for interaction using a 3D dome display, we can also use other non-spherical grids as well.

  4. Mobile health IT: the effect of user interface and form factor on doctor-patient communication.

    PubMed

    Alsos, Ole Andreas; Das, Anita; Svanæs, Dag

    2012-01-01

    Introducing computers into primary care can have negative effects on the doctor-patient dialogue. Little is known about such effects of mobile health IT in point-of-care situations. To assess how different mobile information devices used by physicians in point-of-care situations support or hinder aspects of doctor-patient communication, such as face-to-face dialogue, nonverbal communication, and action transparency. The study draws on two different experimental simulation studies where 22 doctors, in 80 simulated ward rounds, accessed patient-related information from a paper chart, a PDA, and a laptop mounted on a trolley. Video recordings from the simulations were analyzed qualitatively. Interviews with clinicians and patients were used to triangulate the findings and to verify the realism and results of the simulations. The paper chart afforded smooth re-establishment of eye contact, better verbal and non-verbal contact, more gesturing, good visibility of actions, and quick information retrieval. The digital information devices lacked many of these affordances; physicians' actions were not visible for the patients, the user interfaces required much attention, gesturing was harder, and re-establishment of eye contact took more time. Physicians used the devices to display their actions to the patients. The analysis revealed that the findings were related to the user interface and form factor of the information devices, as well as the personal characteristics of the physician. When information is needed and has to be located at the point-of-care, the user interface and the physical form factor of the mobile information device are influential elements for successful collaboration between doctors and patients. Both elements need to be carefully designed so that physicians can use the devices to support face-to-face dialogue, nonverbal communication, and action visibility. 
The ability to facilitate and support the doctor-patient collaboration is a noteworthy usability factor in the design of mobile EPR systems. The paper also presents possible design guidelines for mobile point-of-care systems for improved doctor-patient communication. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Next Gen One Portal Usability Evaluation

    NASA Technical Reports Server (NTRS)

    Cross, E. V., III; Perera, J. S.; Hanson, A. M.; English, K.; Vu, L.; Amonette, W.

    2018-01-01

Each exercise device on the International Space Station (ISS) has a unique, customized software interface with its own layout, hierarchy, and operational principles that require significant crew training. Furthermore, the software programs are not adaptable and provide no real-time feedback or motivation to enhance the exercise experience and/or prevent injuries. Additionally, the graphical user interfaces (GUIs) of these systems present information through multiple layers, making it difficult to navigate to the desired screens and functions. These limitations of current exercise device GUIs lead to increased crew time spent on initiating, loading, performing exercises, logging data, and exiting the system. To address these limitations, a Next Generation One Portal (NextGen One Portal) Crew Countermeasure System (CMS) was developed, which utilizes the latest industry guidelines in GUI design to provide an intuitive, easy-to-use approach (i.e., 80% of the functionality gained within 5-10 minutes of initial use with no or limited formal training required). This is accomplished by providing a consistent interface using common software to reduce crew training and increase efficiency & user satisfaction while also reducing development & maintenance costs. Results from the usability evaluations showed the NextGen One Portal UI having greater efficiency, learnability, memorability, usability, and overall user experience than the current Advanced Resistive Exercise Device (ARED) UI used by astronauts on the ISS. Specifically, the design of the One-Portal UI as an app interface, similar to those found in the Apple and Google app stores, helped many of the participants grasp the concepts of the interface with minimal training. Although the NextGen One-Portal UI was shown to be an overall better interface, the test facilitators observed that specific exercise tasks appeared to have a significant impact on the NextGen One-Portal UI's efficiency. 
Future updates to the NextGen One Portal UI will address these inefficiencies.

  6. User acquaintance with mobile interfaces.

    PubMed

    Ehrler, Frederic; Walesa, Magali; Sarrey, Evelyne; Wipfli, Rolf; Lovis, Christian

    2013-01-01

Handheld technology is slowly finding its place in the healthcare world. Some clinicians already make intensive use of dedicated mobile applications to consult clinical references. However, handheld technology still hasn't been broadly embraced at the core of the healthcare business, the hospitals. The weak penetration of handheld technology in hospitals can be partly explained by the caution of stakeholders, who must be convinced of the efficiency of these tools before going forward. In a domain where temporal constraints are increasingly strong, caregivers cannot lose time playing with gadgets. Not all users are comfortable with tactile manipulation, and the lack of dedicated peripherals complicates data entry for novices. Stakeholders must be convinced that caregivers will be able to master handheld devices. In this paper, we make the assumption that the proper design of an interface may influence users' performance in recording information. We are also interested in finding out whether users increase their efficiency when using handheld tools repeatedly. To answer these questions, we set up a field study to compare users' performance on three different user interfaces while recording vital signs. Some of the user interfaces were familiar to users, and others were totally innovative. Results showed that users' familiarity with smartphones influences their performance and that users improve their performance by repeating a task.

  7. Intuitive control of mobile robots: an architecture for autonomous adaptive dynamic behaviour integration.

    PubMed

    Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide

    2018-05-01

In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism with the ability to adapt both to the user and to the robotic morphology. The aim is a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows users to shape the control motifs in their preferred way, moving away from the case where the user has to read and understand an operation manual or learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole. Inherent properties of the architecture are presented and explained, and emergent properties are presented and investigated. As a whole, this approach is found to highlight the potential for a change in the paradigm of robotic control and a new level in the taxonomy of human-in-the-loop systems.

  8. The effects of time delays on a telepathology user interface.

    PubMed Central

    Carr, D.; Hasegawa, H.; Lemmon, D.; Plaisant, C.

    1992-01-01

Telepathology enables a pathologist to examine physically distant tissue samples by operating a microscope over a communication link. Communication links can impose time delays that cause difficulties in controlling the remote device. Such difficulties were found in a microscope teleoperation system. Since the user interface is critical to pathologists' acceptance of telepathology, we redesigned the user interface for this system and built two different versions: a keypad whose movement commands were issued by specifying a start command followed by a stop command, and a trackball interface whose movement commands were incremental and directly proportional to the rotation of the trackball. We then conducted a pilot study to determine the effect of time delays on the new user interfaces. In our experiment, the keypad was the faster interface when the time delay was short. There was no evidence to favor either the keypad or the trackball when the time delay was longer. Inexperienced participants benefited from being able to move long distances over the microscope slide by dragging the field-of-view indicator on the touchscreen control panel. The experiment suggests that changes could be made to the trackball interface that would improve its performance. PMID:1482878

  9. On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.

    PubMed

    Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando

    2017-08-01

Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis, as users often struggle to obtain the desired orientation, which is only achieved after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists, and professional biomedical engineers. Results demonstrate its usability, as the proposed touchless interface enables better spatial awareness and more fluent interaction with the 3D volume than traditional 2D input devices: it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping-plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection, and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. GEECS (Generalized Equipment and Experiment Control System)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GONSALVES, ANTHONY; DESHMUKH, AALHAD

    2017-01-12

GEECS (Generalized Equipment and Experiment Control System) monitors and controls equipment distributed across a network, performs experiments by scanning input variables, and collects and stores various types of data synchronously from devices. Examples of devices include cameras, motors, and pressure gauges. GEECS is based upon LabVIEW graphical object-oriented programming (GOOP), allowing for a modular and scalable framework. Data for an arbitrary number of variables is published for subscription over TCP. A secondary framework allows easy development of graphical user interfaces for combined control of any available devices on the control system without the need for programming knowledge. This allows for rapid integration of GEECS into a wide variety of systems. A database interface provides for device and process configuration while allowing the user to save large quantities of data to local or network drives.

  11. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands

    PubMed Central

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system’s complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs. 
PMID:26069961
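The menu concept above (the system presents options one at a time via stimulation, and the user issues a single acknowledge signal for the desired one) can be sketched as a simple selection loop. All names here are illustrative; `acknowledge` stands in for reading the user's single command channel during each option's stimulation window.

```python
def select_option(options, acknowledge):
    """Sketch of an electrotactile menu: cycle through the options,
    presenting one at a time, and return the option that is active
    when the user produces the single acknowledge signal.
    `acknowledge(option)` is a stand-in for polling the user's one
    command channel while that option is being stimulated."""
    while True:
        for option in options:
            # in the real system, the electrode pattern for `option`
            # would be driven here during a stimulation window
            if acknowledge(option):
                return option
```

For example, with a prosthetic-hand menu of grasp functions, the loop returns whichever function the user acknowledges, so only one command signal needs to be generated and decoded regardless of how many functions exist.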

  12. Human-Machine Interface for the Control of Multi-Function Systems Based on Electrocutaneous Menu: Application to Multi-Grasp Prosthetic Hands.

    PubMed

    Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario

    2015-01-01

    Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system's complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs.

  13. Towards an intelligent wheelchair system for users with cerebral palsy.

    PubMed

    Montesano, Luis; Díaz, Marta; Bhaskar, Sonu; Minguez, Javier

    2010-04-01

    This paper describes and evaluates an intelligent wheelchair, adapted for users with cognitive disabilities and mobility impairment. The study focuses on patients with cerebral palsy, one of the most common disorders affecting muscle control and coordination, thereby impairing movement. The wheelchair concept is an assistive device that allows the user to select arbitrary local destinations through a tactile screen interface. The device incorporates an automatic navigation system that drives the vehicle, avoiding obstacles even in unknown and dynamic scenarios. It provides the user with a high degree of autonomy, independent from a particular environment, i.e., not restricted to predefined conditions. To evaluate the rehabilitation device, a study was carried out with four subjects with cognitive impairments, between 11 and 16 years of age. They were first trained so as to get acquainted with the tactile interface and then were recruited to drive the wheelchair. Based on the experience with the subjects, an extensive evaluation of the intelligent wheelchair was provided from two perspectives: 1) based on the technical performance of the entire system and its components and 2) based on the behavior of the user (execution analysis, activity analysis, and competence analysis). The results indicated that the intelligent wheelchair effectively provided mobility and autonomy to the target population.

  14. Development of an implantable wireless ECoG 128ch recording device for clinical brain machine interface.

    PubMed

    Matsushita, Kojiro; Hirata, Masayuki; Suzuki, Takafumi; Ando, Hiroshi; Ota, Yuki; Sato, Fumihiro; Morris, Shyne; Yoshida, Takeshi; Matsuki, Hidetoshi; Yoshimine, Toshiki

    2013-01-01

A Brain-Machine Interface (BMI) is a system that infers a user's intention by analyzing the user's brain activity and controls devices with the inferred intention. It is considered one of the prospective tools to enhance paralyzed patients' quality of life. Our group focuses in particular on ECoG (electrocorticogram)-based BMI, which requires surgery to place electrodes on the cortex. We aim to implant all the devices within the patient's head and abdomen and to transmit data and power wirelessly. Our device consists of the following parts: (1) high-density multi-electrodes on a 3D-shaped sheet fitted to the individual brain surface to effectively record the ECoG signals; (2) a small circuit board with two integrated-circuit chips providing 128-channel analogue amplifiers and A/D converters for the ECoG signals; (3) a WiFi data communication and control circuit linking to the target PC; (4) a non-contact power supply transmitting a minimum of 400 mW of electrical power to the device from 20 mm away. We developed these devices, integrated them, and investigated their performance.

  15. The use of ambient audio to increase safety and immersion in location-based games

    NASA Astrophysics Data System (ADS)

    Kurczak, John Jason

The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks. We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a visual interface, while players performed significantly better at the game tasks when using the visual interface. This makes ambient audio a legitimate alternative to visual interfaces for mobile users when safety and immersion are a priority.

  16. Brain-controlled body movement assistance devices and methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leuthardt, Eric C.; Love, Lonnie J.; Coker, Rob

Methods, devices, systems, and apparatus, including computer programs encoded on a computer storage medium, for brain-controlled body movement assistance devices. In one aspect, a device includes a brain-controlled body movement assistance device with a brain-computer interface (BCI) component adapted to be mounted to a user, a body movement assistance component operably connected to the BCI component and adapted to be worn by the user, and a feedback mechanism provided in connection with at least one of the BCI component and the body movement assistance component, the feedback mechanism being configured to output information relating to a usage session of the brain-controlled body movement assistance device.

  17. Exploring the requirements for multimodal interaction for mobile devices in an end-to-end journey context.

    PubMed

    Krehl, Claudia; Sharples, Sarah

    2012-01-01

The paper investigates the requirements for multimodal interaction on mobile devices in an end-to-end journey context. Traditional interfaces are deemed cumbersome and inefficient for exchanging information with the user. Multimodal interaction provides a different, user-centred approach allowing for more natural and intuitive interaction between humans and computers. It is especially suitable for mobile interaction, as it can overcome additional constraints including small screens, awkward keypads, and continuously changing settings - an inherent property of mobility. This paper is based on end-to-end journeys in which users encounter several contexts. Interviews and focus groups explore the requirements for multimodal interaction design for mobile devices by examining journey stages and identifying the users' information needs and sources. Findings suggest that multimodal communication is crucial when users multitask. Choosing suitable modalities depends on user context, characteristics, and tasks.

  18. Methods for studying medical device technology and practitioner cognition: the case of user-interface issues with infusion pumps.

    PubMed

    Schraagen, Jan Maarten; Verhoeven, Fenne

    2013-02-01

The aim of this study was to investigate how a variety of research methods are commonly employed to study technology and practitioner cognition. User-interface issues with infusion pumps were selected as a case because of their relevance to patient safety. Starting from a Cognitive Systems Engineering perspective, we developed an Impact Flow Diagram showing the relationship of computer technology, cognition, practitioner behavior, and system failure in the area of medical infusion devices. We subsequently conducted a systematic literature review on user-interface issues with infusion pumps, categorized the studies in terms of the methods employed, and noted the usability problems found with particular methods. Next, we assigned usability problems and related methods to the levels in the Impact Flow Diagram. Most study methods used to find user-interface issues with infusion pumps focused on observable behavior rather than on how artifacts shape cognition and collaboration. A concerted and theory-driven application of these methods when testing infusion pumps is lacking in the literature. Detailed analysis of one case study provided an illustration of how to apply the Impact Flow Diagram, as well as how the scope of analysis may be broadened to include organizational and regulatory factors. Research methods to uncover problems with the use of technology may be applied in many ways, with many different foci. We advocate the adoption of an Impact Flow Diagram perspective rather than merely focusing on usability issues in isolation. Truly advancing patient safety requires the systematic adoption of a systems perspective that views people and technology as an ensemble, also in the design of medical device technology. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Workshop AccessibleTV "Accessible User Interfaces for Future TV Applications"

    NASA Astrophysics Data System (ADS)

    Hahn, Volker; Hamisu, Pascal; Jung, Christopher; Heinrich, Gregor; Duarte, Carlos; Langdon, Pat

Approximately half of elderly people over 55 suffer from some type of typically mild visual, auditory, motor, or cognitive impairment. For them, interaction, especially with PCs and other complex devices, is sometimes challenging, although accessible ICT applications could make much of a difference to their quality of life. Basically, they have the potential to enable or simplify participation and inclusion in the surrounding private and professional communities. However, the availability of accessible user interfaces capable of adapting to the specific needs and requirements of users with individual impairments is very limited. Although a number of APIs [1, 2, 3, 4] are available for various platforms that allow developers to provide accessibility features within their applications, none of them today provides features for the automatic adaptation of multimodal interfaces to fit the individual requirements of users with different kinds of impairments. Moreover, the provision of accessible user interfaces is still expensive and risky for application developers, as they need special experience and effort for user tests. Today many implementations simply neglect the needs of elderly people, thus locking out a large portion of their potential users. The workshop is organized as part of the dissemination activity for the European-funded project GUIDE "Gentle user interfaces for elderly people", which aims to address this situation with a comprehensive approach to the realization of multimodal user interfaces capable of adapting to the needs of users with different kinds of mild impairments. As an application platform, GUIDE will mainly target TVs and set-top boxes, such as the emerging Connected-TV or WebTV platforms, as they have the potential to address the needs of elderly users with applications such as home automation, communication, or continuing education.

  20. Discriminating Tissue Stiffness with a Haptic Catheter: Feeling the Inside of the Beating Heart.

    PubMed

    Kesner, Samuel B; Howe, Robert D

    2011-01-01

Catheter devices allow physicians to access the inside of the human body easily and painlessly through natural orifices and vessels. Although catheters allow for the delivery of fluids and drugs, the deployment of devices, and the acquisition of measurements, they do not allow clinicians to assess the physical properties of tissue inside the body, due to tissue motion and the transmission limitations of catheter devices, including compliance, friction, and backlash. The goal of this research is to increase the tactile information available to physicians during catheter procedures by providing haptic feedback during palpation procedures. To accomplish this goal, we have developed the first motion-compensated actuated catheter system that enables haptic perception of fast-moving tissue structures. The actuated catheter is instrumented with a distal tip force sensor and a force feedback interface that allows users to adjust the position of the catheter while experiencing the forces on the catheter tip. The efficacy of this device and interface is evaluated through a psychophysical study comparing how accurately users can differentiate various materials attached to a cardiac motion simulator using the haptic device versus a conventional manual catheter. The results demonstrate that haptic feedback improves a user's ability to differentiate material properties and decreases the total number of errors by 50% relative to the manual catheter system.

  1. Quantitative Analysis Of User Interfaces For Large Electronic Home Appliances And Mobile Devices Based On Lifestyle Categorization Of Older Users.

    PubMed

    Shin, Wonkyoung; Park, Minyong

    2017-01-01

    Background/Study Context: The increasing longevity and health of older users, as well as aging populations, have created the need to develop senior-oriented product interfaces. This study aims to find user interface (UI) priorities for groups of older users based on their lifestyle, and to develop quality-of-UI (QUI) models for large electronic home appliances and mobile products. A segmentation table showing how older users can be categorized was created through a review of the literature, and 252 subjects were surveyed with a questionnaire. Factor analysis was performed to extract six preliminary lifestyle factors, which were then used for subsequent cluster analysis. The analysis resulted in four groups. Cross-analysis was carried out to investigate which characteristics were included in the groups. Analysis of variance was then applied to investigate the differences in UI priorities among the user groups for various electronic devices. Finally, QUI models were developed and applied to those electronic devices. Differences in UI priorities were found according to the four lifestyles ("money-oriented," "innovation-oriented," "stability- and simplicity-oriented," and "innovation- and intellectual-oriented"). Twelve QUI models were developed for the four lifestyle groups and their associated products; three washers and three smartphones were used as examples for testing the QUI models. The UI differences among the older user groups under the segmentation used in this study, based on several key (i.e., demographic, socioeconomic, and physical-cognitive) variables, are distinct from earlier studies based on a single variable. The differences in responses clearly indicate the benefit of integrating various characteristics of older users, rather than a single variable, in order to design and develop more innovative and better consumer products in the future. The results of this study showed that older users, who potentially have high buying power in the future, are likely to have higher satisfaction when selecting products customized for their lifestyle. Designers could also use the results of lifestyle-based UI evaluation for older users before developing products through QUI modeling. This approach would save time and costs.

  2. Serial Interface through Stream Protocol on EPICS Platform for Distributed Control and Monitoring

    NASA Astrophysics Data System (ADS)

    Das Gupta, Arnab; Srivastava, Amit K.; Sunil, S.; Khan, Ziauddin

    2017-04-01

    Remote operation of equipment and devices is implemented in distributed systems in order to control processes and properly monitor process values. For such remote operations, the Experimental Physics and Industrial Control System (EPICS) is one of the important software tools for the control and monitoring of a wide range of scientific parameters. A hardware interface was developed for implementation with the EPICS software so that different equipment such as data converters, power supplies, pump controllers, etc. can be remotely operated through a stream protocol. EPICS Base was set up on Windows as well as Linux operating systems for control and monitoring, while the EPICS modules asyn and StreamDevice were used to interface equipment over the standard RS-232/RS-485 protocols. StreamDevice communicates with the serial line through an interface to asyn drivers. Graphical user interfaces and alarm handling were implemented with the Motif Editor and Display Manager (MEDM) and the Alarm Handler (ALH) command-line channel access utilities. This paper describes the developed application, which was tested with different equipment and devices serially interfaced to PCs on a distributed network.
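The asyn/StreamDevice wiring described above is typically expressed in a small protocol file that is referenced from an EPICS database record. A minimal sketch follows; the instrument command ("PRES?"), file name, and macros are hypothetical, not taken from the paper:

```
# pump.proto -- hypothetical StreamDevice protocol file for a serial gauge
Terminator = CR LF;

get_pressure {
    out "PRES?";      # query sent over RS-232 via the asyn port
    in  "%f";         # reply parsed into the record's VAL field
}
```

An `ai` record then binds the protocol entry to an asyn port, e.g. `field(DTYP, "stream")` and `field(INP, "@pump.proto get_pressure $(PORT)")`, with `SCAN` set to the desired polling rate.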

  3. A Multi-purpose Brain-Computer Interface Output Device

    PubMed Central

    Thompson, David E; Huggins, Jane E

    2012-01-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as standalone communication and control systems, rather than as interfaces to existing systems built for these purposes. While an individual communication and control system may be powerful or flexible, no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e. without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems. PMID:22208120

  4. A multi-purpose brain-computer interface output device.

    PubMed

    Thompson, David E; Huggins, Jane E

    2011-10-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as stand-alone communication and control systems, rather than as interfaces to existing systems built for these purposes. An individual communication and control system may be powerful or flexible, but no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e., without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems.

  5. A prototype for communitising technology: Development of a smart salt water desalination device

    NASA Astrophysics Data System (ADS)

    Fakharuddin, F. M.; Fatchurrohman, N.; Puteh, S.; Puteri, H. M. A. R.

    2018-04-01

    Desalination is the process that removes minerals from saline water, commonly known as salt water. Seawater desalination is becoming an attractive source of drinking water in coastal states as the cost of desalination declines. The purpose of this study is to develop a small-scale desalination device and to analyze its process flow using suitable sensors. Thermal technology was used to aid the desalination process. A graphical user interface (GUI) was built to enable real-time data analysis of the desalination device, and an Arduino™ microcontroller was used to automate the device.
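A host-side sketch of the telemetry handling such a GUI needs. The line format and field names below are invented for illustration; the abstract does not specify them. On the Arduino side, one would print a line like `temp=78.5,tds=120.0,level=3.2` at a fixed interval:

```python
# Hypothetical parser for sensor telemetry lines received from the Arduino
# over a serial link (field names are assumptions, not from the paper).
def parse_reading(line: str) -> dict:
    """Split 'key=value' pairs from one telemetry line into floats."""
    return {key: float(value)
            for key, value in (pair.split("=") for pair in line.strip().split(","))}

print(parse_reading("temp=78.5,tds=120.0,level=3.2"))
# {'temp': 78.5, 'tds': 120.0, 'level': 3.2}
```

With pyserial, the GUI loop would read lines from `serial.Serial(port, 9600)` and feed each parsed reading to the plot.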

  6. Passive wireless tags for tongue controlled assistive technology interfaces.

    PubMed

    Rakibet, Osman O; Horne, Robert J; Kelly, Stephen W; Batchelor, John C

    2016-03-01

    Tongue control with low-profile, passive mouth tags is demonstrated as a human-device interface by communicating values of tongue-tag separation over a wireless link. Confusion matrices are provided to demonstrate user accuracy in targeting by tongue position. Accuracy is found to increase dramatically after short training sequences, with errors falling close to 1% in magnitude and zero missed targets. The rate at which users learn accurate targeting indicates that this is an intuitive device to operate. The significance of the work is that innovative, very unobtrusive wireless tags can be used to provide intuitive human-computer interfaces based on low-cost, disposable mouth-mounted technology. With the development of an appropriate reading system, control of assistive devices such as computer mice or wheelchairs could be possible for tetraplegics and others who retain fine motor control capability of their tongues. The tags contain no battery and are intended to fit directly on the hard palate, detecting tongue position in the mouth with no need for tongue piercings.
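The targeting accuracy reported above is read off confusion matrices. A minimal sketch of how accuracy falls out of such a matrix; the counts below are made up for illustration, not the paper's data:

```python
def accuracy(confusion):
    """Fraction of trials on the diagonal (intended target == selected target)."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# hypothetical post-training session with three tongue-position targets:
# rows = intended target, columns = selected target
session = [[49, 1, 0],
           [0, 50, 0],
           [1, 0, 49]]
print(round(1 - accuracy(session), 3))  # error rate: 0.013
```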

  7. On-line data display

    NASA Astrophysics Data System (ADS)

    Lang, Sherman Y. T.; Brooks, Martin; Gauthier, Marc; Wein, Marceli

    1993-05-01

    A data display system for embedded realtime systems has been developed for use as an operator's user interface and debugging tool. The motivation for development of the On-Line Data Display (ODD) has come from several sources. In particular the design reflects the needs of researchers developing an experimental mobile robot within our laboratory. A proliferation of specialized user interfaces revealed a need for a flexible communications and graphical data display system. At the same time the system had to be readily extensible for the arbitrary graphical display formats required by the data visualization needs of the researchers. The system defines a communication protocol transmitting 'datagrams' between tasks executing on the realtime system and virtual devices displaying the data in a meaningful way on a graphical workstation. The communication protocol multiplexes logical channels on a single data stream. The current implementation consists of a server for the Harmony realtime operating system and an application written for the Macintosh computer. Flexibility requirements resulted in a highly modular server design, and a layered modular object-oriented design for the Macintosh part of the system. Users assign data types to specific channels at run time. Then devices are instantiated by the user and connected to channels to receive datagrams. The current suite of device types does not provide enough functionality for most users' specialized needs. Instead the system design allows the creation of new device types with modest programming effort. The protocol, design and use of the system are discussed.
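The abstract does not specify the ODD wire format; the following sketch only illustrates the idea of multiplexing logical channels over one byte stream, with an invented header layout (2-byte channel id, 4-byte payload length):

```python
import struct

# Hypothetical framing for ODD-style 'datagrams'; the real protocol is not
# described in the abstract.
HEADER = struct.Struct(">HI")  # channel id (uint16), payload length (uint32)

def pack_datagram(channel: int, payload: bytes) -> bytes:
    return HEADER.pack(channel, len(payload)) + payload

def unpack_stream(buf: bytes):
    """Yield (channel, payload) pairs from a concatenated datagram stream."""
    offset = 0
    while offset < len(buf):
        channel, length = HEADER.unpack_from(buf, offset)
        offset += HEADER.size
        yield channel, buf[offset:offset + length]
        offset += length

stream = pack_datagram(1, b"pose: 0.2 1.4") + pack_datagram(7, b"sonar: 88")
print(list(unpack_stream(stream)))
# [(1, b'pose: 0.2 1.4'), (7, b'sonar: 88')]
```

On the workstation side, each display device would subscribe to a channel id and receive only the payloads demultiplexed for it.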

  8. Adapting human-machine interfaces to user performance.

    PubMed

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2008-01-01

    The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor at the endpoint of the simulated arm.
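The simulated arm's endpoint follows from standard planar 2-link forward kinematics. A sketch with arbitrary unit link lengths; how the high-dimensional finger motions map onto the two joint angles is the learned part of the study and is not shown here:

```python
import math

def endpoint(theta1: float, theta2: float, l1: float = 1.0, l2: float = 1.0):
    """Endpoint of a planar 2-link arm; theta2 is the elbow angle
    measured relative to link 1."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# fully extended along the x-axis
print(endpoint(0.0, 0.0))  # (2.0, 0.0)
```

The cursor the subjects steered would simply track this `(x, y)` as the decoded joint angles change.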

  9. Study of the human/ITS interface issues on the design of traffic information bulletin board and traffic control signal displays

    DOT National Transportation Integrated Search

    2002-10-01

    The success of automation for intelligent transportation systems is ultimately contingent upon the interface between the users (humans) and the system (ITS). The issues of variable message signs (VMS) and traffic signal device (TSD) design were studi...

  10. Defining brain-machine interface applications by matching interface performance with device requirements.

    PubMed

    Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo

    2008-01-15

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, even low-performing interfaces can be acceptable for prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, again in terms of throughput and latency. Then device requirements are matched with the performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
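The matching step, comparing device requirements against interface performance in terms of throughput and latency, can be sketched as a simple filter. All numbers below are invented placeholders, not the paper's measurements:

```python
# Illustrative only: throughput (bits/s) and latency (s) figures are made up.
interfaces = {
    "non-invasive BMI": {"throughput": 0.5, "latency": 4.0},
    "joystick":         {"throughput": 10.0, "latency": 0.2},
}
devices = {
    "light switch":    {"min_throughput": 0.1, "max_latency": 5.0},
    "prosthetic hand": {"min_throughput": 2.0, "max_latency": 0.5},
}

def feasible_pairs(interfaces, devices):
    """An interface can drive a device if it meets both requirements."""
    return [(i, d) for i, ip in interfaces.items() for d, dp in devices.items()
            if ip["throughput"] >= dp["min_throughput"]
            and ip["latency"] <= dp["max_latency"]]

print(feasible_pairs(interfaces, devices))
```

Regions of overlap, i.e. the pairs this returns, correspond to the effective combinations the paper identifies (here the BMI suffices for the light switch but not the prosthetic hand).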

  11. The CDS at the Age of Multitouch Interfaces and Mobility

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Boch, T.; Fernique, P.; Kaestlé, V.

    2012-09-01

    Currently, we are witnessing a rapid evolution of human-machine interfaces based on the widespread use of multitouch screens. This evolution is not just a replacement of the mouse-keyboard couple but requires a recast of the interfaces to take advantage of the new features (for example, simultaneous selections in different parts of the screen). Traditional operating systems (mostly Windows and Linux) are also moving towards the integration of multitouch: it is available in Windows 7 and in Ubuntu (since release 10.10). The user interfaces of existing applications will be deeply impacted, as this is not just an adaptation of the existing ones: it is a transition from menu selections and button clicks to intuition-based interaction. In this context the use of semantics could help the system understand what the user wants to do and simplify the interfaces. The number of mobile devices (smartphones based on iPhone OS, Android OS and others; tablet computers such as the iPad and Galaxy Tab) is growing exponentially, with a sustained replacement cycle of about 18 months per device. Smartphones provide access to Web services but also to dedicated applications (available on the App Store, Android Market, etc.). The investment in human resources needed to provide services on mobile devices can be limited in the first case (a simple adaptation of existing Web pages) but is higher for dedicated applications (software development for a given operating system, plus porting to other systems to achieve sufficient diffusion). Following this approach, we have developed an Aladin Allsky lite application for Android, SkySurveys. This application is based on HEALPix, and it was a real challenge to provide a tool with good display performance on basic hardware compared to a desktop or laptop. We are now focusing on the use of HTML5, an emerging technology supported by recent versions of Internet browsers, which can provide rich content. HTML5 has the advantage of allowing developments independent of the mobile platform (‘write once, run everywhere’). We also expect to broaden the use of the services to new audiences, in particular the educational community, through new, more user-friendly interfaces in terms of usability and interaction.

  12. Dialogue enabling speech-to-text user assistive agent system for hearing-impaired person.

    PubMed

    Lee, Seongjae; Kang, Sunmee; Han, David K; Ko, Hanseok

    2016-06-01

    A novel approach for assisting bidirectional communication between people with normal hearing and the hearing-impaired is presented. While existing hearing-impaired assistive devices such as hearing aids and cochlear implants are vulnerable in extreme noise conditions or to post-surgery side effects, the proposed concept is an alternative approach in which spoken dialogue is achieved by means of a robust speech recognition technique that takes noisy environmental factors into consideration, without any attachment to the human body. The proposed system is a portable device with an acoustic beamformer for directional noise reduction, capable of performing speech-to-text transcription using a keyword spotting method. It is also equipped with a user interface optimized for hearing-impaired people, rendering device usage intuitive and natural across diverse domain contexts. The experimental results confirm that the proposed interface design is feasible for realizing an effective and efficient intelligent agent for the hearing-impaired.
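The abstract does not detail the device's beamformer; delay-and-sum is the classic directional noise reduction technique, and a toy sketch gives the flavor, assuming integer sample delays computed elsewhere from the microphone geometry:

```python
def delay_and_sum(channels, delays):
    """Align each microphone channel by its integer sample delay and average.
    Signals from the steered direction add coherently; off-axis noise does not."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]

# toy two-microphone example: the same pulse reaches mic 2 one sample later
mic1 = [0.0, 1.0, 0.0, 0.0]
mic2 = [0.0, 0.0, 1.0, 0.0]
print(delay_and_sum([mic1, mic2], delays=[0, 1]))  # [0.0, 1.0, 0.0]
```

The enhanced signal would then be passed to the keyword-spotting recognizer.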

  13. Remapping residual coordination for controlling assistive devices and recovering motor functions.

    PubMed

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias B; Mussa-Ivaldi, Ferdinando A; Casadio, Maura

    2015-12-01

    The concept of human motor redundancy has attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems encountered in the development of human-machine interfaces (HMIs). An HMI can act at different levels, decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs bypass completely the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user's residual upper-body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of left-right symmetric mobility, or to increase the strength and range of motion of the upper body spared by the injury. Results showed that this approach restored symmetry between the left and right sides of the body, with an increase in mobility and strength across all the degrees of freedom involved in the control of the interface. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and to reach specific rehabilitative goals. Engaging users in functional and entertaining tasks while practicing with the interface, and changing the map in the proposed ways, is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. Copyright © 2015 Elsevier Ltd. All rights reserved.
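The PCA remapping described above can be sketched in a few lines: center the recorded body signals, take the top two principal axes, and project onto them to get cursor coordinates. The signal dimensions and data here are random stand-ins, not the study's recordings:

```python
import numpy as np

# Sketch of a PCA body-to-cursor map: high-dimensional upper-body signals
# (e.g. 8 sensor channels) projected onto their two principal components.
rng = np.random.default_rng(0)
signals = rng.standard_normal((500, 8))   # 500 samples, 8 body-sensor channels

centered = signals - signals.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
mapping = vt[:2]                          # rows = top-2 principal axes

cursor = centered @ mapping.T             # (500, 2) cursor trajectory
print(cursor.shape)                       # (500, 2)
```

The map modifications the article describes would then amount to editing or reweighting the rows of `mapping`, e.g. to force neglected degrees of freedom back into the control loop.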

  14. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. The system is a highly effective human-machine interface that detects head movement from the changing positions and number of light sources on the head. When the user browses a computer screen with the head-mounted display, the system captures images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, a computer program locates the center point of each pupil in the images and records the moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head with a CCD camera placed in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so that the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interfaces for virtual reality applications.
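Transferring the detected head center from camera coordinates to screen coordinates can be as simple as a linear scaling; a hypothetical, uncalibrated sketch (resolutions and the lack of per-user calibration are assumptions for illustration):

```python
def to_screen(cam_xy, cam_size=(640, 480), screen_size=(1920, 1080)):
    """Map a detected center point from camera pixels to screen pixels.
    A plain linear map; a real system would calibrate per user."""
    (cx, cy), (cw, ch), (sw, sh) = cam_xy, cam_size, screen_size
    return cx / cw * sw, cy / ch * sh

# center of the camera image lands at the center of the screen
print(to_screen((320, 240)))  # (960.0, 540.0)
```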

  15. Matching brain-machine interface performance to space applications.

    PubMed

    Citi, Luca; Tonet, Oliver; Marinelli, Martina

    2009-01-01

    A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and an HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls, are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.

  16. 78 FR 36478 - Accessibility of User Interfaces, and Video Programming Guides and Menus

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-18

    ... equipment: ``digital apparatus'' and ``navigation devices.'' Specifically, section 204 applies to ``digital... apparatus, including equipment purchased at retail by a consumer to access video programming, would be..., and video programming guides, and menus provided by digital apparatus and navigation devices are...

  17. A Smart and Balanced Energy-Efficient Multihop Clustering Algorithm (Smart-BEEM) for MIMO IoT Systems in Future Networks.

    PubMed

    Xu, Lina; O'Hare, Gregory M P; Collier, Rem

    2017-07-05

    Wireless Sensor Networks (WSNs) are typically composed of thousands of sensors powered by limited energy resources. Clustering techniques were introduced to prolong network longevity, offering the promise of green computing. However, most existing work fails to consider network coverage when evaluating the lifetime of a network. We believe that balancing the energy consumption per unit area, rather than on each single sensor, can provide better-balanced power usage throughout the network. Our former work, Balanced Energy-Efficiency (BEE) and its multihop version BEEM, can not only extend network longevity but also maintain network coverage. Following WSNs, Internet of Things (IoT) technology has been proposed with a higher degree of diversity in terms of communication abilities and user scenarios, supporting a large range of real-world applications. IoT devices are embedded with multiple communication interfaces, normally referred to as Multiple-In and Multiple-Out (MIMO) in 5G networks. The applications running on those devices can generate various types of data. Every interface has its own characteristics, which may be preferred and beneficial in some specific user scenarios. With MIMO becoming more available on IoT devices, an advanced clustering solution for highly dynamic IoT systems is missing and pressingly demanded in order to cater for differing user applications. In this paper, we present a smart clustering algorithm (Smart-BEEM), based on our former work BEE(M), to accomplish energy-efficient and Quality of user Experience (QoE)-supported communication in cluster-based IoT networks. It is a user-behaviour- and context-aware approach, aiming to help IoT devices choose beneficial communication interfaces and cluster heads for data transmission. Experimental results have proved that Smart-BEEM can further improve the performance of BEE and BEEM for coverage-sensitive longevity.
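Smart-BEEM's actual cost function is not given in the abstract; the toy selection rule below only illustrates the underlying idea of choosing a communication interface per application from energy cost and bandwidth need. Interface names and figures are invented:

```python
# Hypothetical per-interface characteristics of one MIMO IoT node.
interfaces = {
    "ble":  {"energy_per_kb": 0.01, "bandwidth_kbps": 100},
    "wifi": {"energy_per_kb": 0.10, "bandwidth_kbps": 10000},
}

def choose_interface(interfaces, required_kbps):
    """Cheapest interface that still meets the application's bandwidth need."""
    viable = {name: props for name, props in interfaces.items()
              if props["bandwidth_kbps"] >= required_kbps}
    return min(viable, key=lambda name: viable[name]["energy_per_kb"])

print(choose_interface(interfaces, 50))    # ble  (low-rate telemetry)
print(choose_interface(interfaces, 5000))  # wifi (bulk data)
```

A context-aware scheme like Smart-BEEM would additionally weigh QoE terms and the choice of cluster head, not just raw energy per kilobyte.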

  18. A Smart and Balanced Energy-Efficient Multihop Clustering Algorithm (Smart-BEEM) for MIMO IoT Systems in Future Networks †

    PubMed Central

    O’Hare, Gregory M. P.; Collier, Rem

    2017-01-01

    Wireless Sensor Networks (WSNs) are typically composed of thousands of sensors powered by limited energy resources. Clustering techniques were introduced to prolong network longevity, offering the promise of green computing. However, most existing work fails to consider network coverage when evaluating the lifetime of a network. We believe that balancing the energy consumption per unit area, rather than on each single sensor, can provide better-balanced power usage throughout the network. Our former work, Balanced Energy-Efficiency (BEE) and its multihop version BEEM, can not only extend network longevity but also maintain network coverage. Following WSNs, Internet of Things (IoT) technology has been proposed with a higher degree of diversity in terms of communication abilities and user scenarios, supporting a large range of real-world applications. IoT devices are embedded with multiple communication interfaces, normally referred to as Multiple-In and Multiple-Out (MIMO) in 5G networks. The applications running on those devices can generate various types of data. Every interface has its own characteristics, which may be preferred and beneficial in some specific user scenarios. With MIMO becoming more available on IoT devices, an advanced clustering solution for highly dynamic IoT systems is missing and pressingly demanded in order to cater for differing user applications. In this paper, we present a smart clustering algorithm (Smart-BEEM), based on our former work BEE(M), to accomplish energy-efficient and Quality of user Experience (QoE)-supported communication in cluster-based IoT networks. It is a user-behaviour- and context-aware approach, aiming to help IoT devices choose beneficial communication interfaces and cluster heads for data transmission. Experimental results have proved that Smart-BEEM can further improve the performance of BEE and BEEM for coverage-sensitive longevity. PMID:28678164

  19. Facilitating mathematics learning for students with upper extremity disabilities using touch-input system.

    PubMed

    Choi, Kup-Sze; Chan, Tak-Yin

    2015-03-01

    To investigate the feasibility of using a tablet device as the user interface for students with upper extremity disabilities to input mathematics efficiently into a computer. A touch-input system using a tablet device as the user interface was proposed to assist these students in writing mathematics. User-switchable and context-specific keyboard layouts were designed to streamline the input process. The system can be integrated with conventional computer systems with only minor software setup. A two-week pre-post test study involving five participants was conducted to evaluate the performance of the system and collect user feedback. The mathematics input efficiency of the participants was found to improve over the experiment sessions. In particular, their performance in entering trigonometric expressions with the touch-input system was significantly better than with conventional mathematics editing software using keyboard and mouse. The participants rated the touch-input system positively and were confident that they could operate it with ease given more practice. The proposed touch-input system provides a convenient way for students with hand impairment to write mathematics and has the potential to facilitate their mathematics learning. Implications for Rehabilitation: Students with upper extremity disabilities often face barriers to learning mathematics, which is largely based on handwriting. Conventional computer user interfaces are inefficient for them to input mathematics into a computer. A touch-input system with context-specific and user-switchable keyboard layouts was designed to improve the efficiency of mathematics input. Experimental results and user feedback suggested that the system has the potential to facilitate mathematics learning for these students.
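The context-specific, user-switchable layouts can be pictured as a mapping from the active context to a key set. The layout names and key contents below are invented for illustration; the paper's actual layouts are not specified in the abstract:

```python
# Hypothetical context-switched layouts in the spirit of the touch-input system.
LAYOUTS = {
    "arithmetic":   ["+", "-", "×", "÷", "=", "(", ")"],
    "trigonometry": ["sin", "cos", "tan", "θ", "°", "(", ")"],
}

def keys_for(context: str):
    """Return the key set for the active context, falling back to arithmetic."""
    return LAYOUTS.get(context, LAYOUTS["arithmetic"])

print(keys_for("trigonometry")[:3])  # ['sin', 'cos', 'tan']
```

Switching the visible key set to match the expression being entered is what spares the user from hunting through a full symbol palette for each keystroke.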

  20. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases.

    PubMed

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-04-15

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii with its Nunchuk and Balance Board peripherals, head-mounted displays and joysticks allow interaction and immersion in virtual environments created with computer software. Virtual environments are highly interactive, generating great activation of the visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes into virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.

  1. Virtual reality interface devices in the reorganization of neural networks in the brain of patients with neurological diseases

    PubMed Central

    Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo

    2014-01-01

    Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii with its Nunchuk and Balance Board peripherals, head-mounted displays, and joysticks allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of the visual, vestibular, and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, the incorporation of therapeutic purposes into virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brains, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies. PMID:25206907

  2. Processing module operating methods, processing modules, and communications systems

    DOEpatents

    McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy

    2014-09-09

    A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.
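The decrypt-then-execute flow claimed above can be illustrated with a toy sketch in which the host only ever handles ciphertext while the module holds the key and runs the code internally. The XOR keystream cipher and all names below are illustrative stand-ins, not the patent's actual scheme:

```python
import hashlib

# Toy sketch of the decrypt-then-execute flow: the host/wireless device
# only ever sees ciphertext; the module holds the key and executes the
# decrypted code internally. The SHA-256 counter-mode XOR cipher here is
# a stand-in for whatever cipher a real implementation would use.

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class ProcessingModule:
    def __init__(self, key: bytes):
        self._key = key           # key never leaves the module

    def run_encrypted(self, ciphertext: bytes) -> dict:
        code = xor_cipher(self._key, ciphertext).decode()
        scope: dict = {}
        exec(code, scope)         # decrypted code runs inside the module only
        return {k: v for k, v in scope.items() if not k.startswith("__")}

# The "web site" / host side handles only the ciphertext:
key = b"module-secret"
ciphertext = xor_cipher(key, b"result = 6 * 7")
module = ProcessingModule(key)
```

The host cannot decrypt the payload it relays, mirroring the claim that the wireless communications device "is unable to decrypt the encrypted code."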

  3. Ubiquitous computing to support co-located clinical teams: using the semiotics of physical objects in system design.

    PubMed

    Bang, Magnus; Timpka, Toomas

    2007-06-01

    Co-located teams often use material objects to communicate messages in collaboration. Modern desktop computing systems with abstract graphical user interfaces (GUIs) fail to support this material dimension of interpersonal communication. The aim of this study is to investigate how tangible user interfaces can be used in computer systems to better support collaborative routines among co-located clinical teams. The semiotics of physical objects used in team collaboration was analyzed from data collected during 1 month of observations at an emergency room. The resulting set of communication patterns was used as a framework when designing an experimental system. Following the principles of augmented reality, physical objects were mapped into a physical user interface with the goal of maintaining the symbolic value of those objects. NOSTOS is an experimental ubiquitous computing environment that takes advantage of interaction devices integrated into the traditional clinical environment, including digital pens, walk-up displays, and a digital desk. The design uses familiar workplace tools to function as user interfaces to the computer in order to exploit established cognitive and collaborative routines. Paper-based tangible user interfaces and digital desks are promising technologies for co-located clinical teams. A key issue that needs to be solved before employing such solutions in practice is associated with limited feedback from the passive paper interfaces.

  4. Device Independent Layout and Style Editing Using Multi-Level Style Sheets

    NASA Astrophysics Data System (ADS)

    Dees, Walter

    This paper describes a layout and styling framework that is based on the multi-level style sheets approach. It shows some of the techniques that can be used to add layout and style information to a UI in a device-independent manner, and how to reuse the layout and style information to create user interfaces for different devices.

  5. Discriminating Tissue Stiffness with a Haptic Catheter: Feeling the Inside of the Beating Heart

    PubMed Central

    Kesner, Samuel B.; Howe, Robert D.

    2011-01-01

    Catheter devices allow physicians to access the inside of the human body easily and painlessly through natural orifices and vessels. Although catheters allow for the delivery of fluids and drugs, the deployment of devices, and the acquisition of measurements, they do not allow clinicians to assess the physical properties of tissue inside the body, due to tissue motion and the transmission limitations of catheter devices, including compliance, friction, and backlash. The goal of this research is to increase the tactile information available to physicians during catheter procedures by providing haptic feedback during palpation procedures. To accomplish this goal, we have developed the first motion-compensated actuated catheter system that enables haptic perception of fast-moving tissue structures. The actuated catheter is instrumented with a distal tip force sensor and a force feedback interface that allows users to adjust the position of the catheter while experiencing the forces on the catheter tip. The efficacy of this device and interface is evaluated through a psychophysical study comparing how accurately users can differentiate various materials attached to a cardiac motion simulator using the haptic device and a conventional manual catheter. The results demonstrate that haptics improves a user's ability to differentiate material properties and decreases the total number of errors by 50% over the manual catheter system. PMID:25285321
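Discriminating tissue stiffness from distal-tip data reduces, in the simplest model, to estimating the slope of the force-displacement relation. A least-squares sketch on synthetic data follows; the stiffness values and noise level are illustrative, not from the study:

```python
import numpy as np

def estimate_stiffness(displacement, force):
    """Least-squares slope of force vs. displacement (units: N/mm)."""
    # Fit force = k * displacement + b and return k.
    A = np.vstack([displacement, np.ones_like(displacement)]).T
    slope, _ = np.linalg.lstsq(A, force, rcond=None)[0]
    return slope

# Synthetic indentation sweeps for two materials (values illustrative).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 50)                  # indentation depth, mm
soft = 0.8 * x + rng.normal(0, 0.02, 50)       # ~0.8 N/mm material
stiff = 3.0 * x + rng.normal(0, 0.02, 50)      # ~3.0 N/mm material
k_soft = estimate_stiffness(x, soft)
k_stiff = estimate_stiffness(x, stiff)
```

Comparing the recovered slopes is the kind of discrimination task the psychophysical study asked users to perform by feel through the haptic interface.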

  6. Mental rotation of tactile stimuli: using directional haptic cues in mobile devices.

    PubMed

    Gleeson, Brian T; Provancher, William R

    2013-01-01

    Haptic interfaces have the potential to enrich users' interactions with mobile devices and convey information without burdening the user's visual or auditory attention. Haptic stimuli with directional content, for example, navigational cues, may be difficult to use in handheld applications; the user's hand, where the cues are delivered, may not be aligned with the world, where the cues are to be interpreted. In such a case, the user would be required to mentally transform the stimuli between different reference frames. We examine the mental rotation of directional haptic stimuli in three experiments, investigating: 1) users' intuitive interpretation of rotated stimuli, 2) mental rotation of haptic stimuli about a single axis, and 3) rotation about multiple axes and the effects of specific hand poses and joint rotations. We conclude that directional haptic stimuli are suitable for use in mobile applications, although users do not naturally interpret rotated stimuli in any one universal way. We find evidence of cognitive processes involving the rotation of analog, spatial representations and discuss how our results fit into the larger body of mental rotation research. For small angles (e.g., less than 40 degrees), these mental rotations come at little cost, but rotations with larger misalignment angles impact user performance. When considering the design of a handheld haptic device, our results indicate that hand pose must be carefully considered, as certain poses increase the difficulty of stimulus interpretation. Generally, all tested joint rotations impact task difficulty, but finger flexion and wrist rotation interact to greatly increase the cost of stimulus interpretation; such hand poses should be avoided when designing a haptic interface.

  7. AsTeRICS.

    PubMed

    Drajsajtl, Tomáš; Struk, Petr; Bednárová, Alice

    2013-01-01

    AsTeRICS - "The Assistive Technology Rapid Integration & Construction Set" - is a construction set for assistive technologies which can be adapted to the motor abilities of end-users. AsTeRICS allows access to different devices such as PCs, cell phones and smart home devices, with all of them integrated in a platform adapted as much as possible to each user. People with motor disabilities in the upper limbs, with no cognitive impairment, no perceptual limitations (neither visual nor auditory) and with basic skills in using technologies such as PCs, cell phones, electronic agendas, etc. have available a flexible and adaptable technology which enables them to access Human-Machine Interfaces (HMI) on the standard desktop and beyond. AsTeRICS provides graphical model design tools, a middleware, and hardware support for the creation of tailored AT solutions involving bioelectric signal acquisition, Brain-/Neural Computer Interfaces, Computer-Vision techniques, and standardized actuator and device controls, and allows combining several off-the-shelf AT devices in every desired combination. Novel, end-user-ready solutions can be created and adapted via a graphical editor without additional programming effort. The AsTeRICS open-source framework provides resources for utilization and extension of the system to developers and researchers. AsTeRICS was developed by the AsTeRICS project and was partially funded by the EC.

  8. A P300-based brain-computer interface aimed at operating electronic devices at home for severely disabled people.

    PubMed

    Corralejo, Rebeca; Nicolás-Alonso, Luis F; Alvarez, Daniel; Hornero, Roberto

    2014-10-01

    The present study aims at developing and assessing an assistive tool for operating electronic devices at home by means of a P300-based brain-computer interface (BCI). Fifteen severely impaired subjects participated in the study. The developed tool allows users to interact with their usual environment, fulfilling their main needs. It allows for navigation through ten menus and the management of up to 113 control commands from eight electronic devices. Ten of the fifteen subjects were able to operate the proposed tool with accuracy above 77%. Eight of them reached accuracies higher than 95%. Moreover, bitrates up to 20.1 bit/min were achieved. The novelty of this study lies in the use of an environment control application in a real scenario: real devices managed by potential BCI end-users. Although impaired users might not be able to set up this system without the aid of others, this study takes a significant step toward evaluating the degree to which such populations could eventually operate a stand-alone system. Our results suggest that neither the type nor the degree of disability is a relevant issue in suitably operating a P300-based BCI. Hence, it could be useful in assisting disabled people at home, improving their personal autonomy.
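Accuracy and bitrate figures like those above can be related through the standard Wolpaw information-transfer-rate formula, ITR = s[log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1))], where N is the number of selectable items, P the selection accuracy, and s the selections per minute. A sketch with assumed session parameters (the 6x6 matrix and selection rate are illustrative, not taken from the study):

```python
import math

def wolpaw_itr(n_choices: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw information transfer rate, in bits per minute."""
    p, n = accuracy, n_choices
    bits = math.log2(n)                      # bits per perfect selection
    if 0 < p < 1:                            # penalty for imperfect accuracy
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return selections_per_min * bits

# Assumed example: a 6x6 P300 speller matrix (36 items) at 95% accuracy
# and 5 selections per minute -- parameters are illustrative.
itr = wolpaw_itr(36, 0.95, 5.0)
```

The formula makes the study's point concrete: the achievable bitrate depends on accuracy and selection speed, not on the type or degree of the user's disability.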

  9. An Ambient Intelligence Framework for the Provision of Geographically Distributed Multimedia Content to Mobility Impaired Users

    NASA Astrophysics Data System (ADS)

    Kehagias, Dionysios D.; Giakoumis, Dimitris; Tzovaras, Dimitrios; Bekiaris, Evangelos; Wiethoff, Marion

    This chapter presents an ambient intelligence framework whose goal is to meet the information needs of mobility-impaired users on the move. This framework couples users with geographically distributed services and the corresponding multimedia content, enabling access to context-sensitive information based on the user's geographic location and the use case under consideration. It provides a multi-modal facility that is realized through a set of mobile devices and user interfaces that address the needs of ten different types of user impairments. The overall ambient intelligence framework enables users who are equipped with mobile devices to access multimedia content in order to undertake activities relevant to one or more of the following domains: transportation, tourism and leisure, personal support services, work, business, education, and social relations and community building. User experience is explored against those activities through a specific usage scenario.

  10. Interaction devices for hands-on desktop design

    NASA Astrophysics Data System (ADS)

    Ju, Wendy; Madsen, Sally; Fiene, Jonathan; Bolas, Mark T.; McDowall, Ian E.; Faste, Rolf

    2003-05-01

    Starting with a list of typical hand actions - such as touching or twisting - a collection of physical input device prototypes was created to study better ways of engaging the body and mind in the computer aided design process. These devices were interchangeably coupled with a graphics system to allow for rapid exploration of the interplay between the designer's intent, body motions, and the resulting on-screen design. User testing showed that a number of key considerations should influence the future development of such devices: coupling between the physical and virtual worlds, tactile feedback, and scale. It is hoped that these explorations contribute to the greater goal of creating user interface devices that increase the fluency, productivity and joy of computer-augmented design.

  11. Remapping residual coordination for controlling assistive devices and recovering motor functions

    PubMed Central

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias; Mussa-Ivaldi, Ferdinando A.; Casadio, Maura

    2015-01-01

    The concept of human motor redundancy has attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any single well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in the development of human–machine interfaces (HMIs). The HMI can act at different levels, decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs bypass completely the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user's residual upper body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g., playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. 
Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between left and right side of the body, with an increase of mobility and strength of all the degrees of freedom in the participants involved in the control of the interface. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. PMID:26341935
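The PCA-based remapping described above - projecting high-dimensional upper-body signals onto the two leading principal components to drive a cursor - can be sketched as follows; the channel count and calibration data are illustrative, not from the study:

```python
import numpy as np

def fit_body_to_cursor_map(signals: np.ndarray):
    """Fit a PCA map from body signals (samples x sensors) to 2D cursor space."""
    mean = signals.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal components.
    _, _, vt = np.linalg.svd(signals - mean, full_matrices=False)
    components = vt[:2]               # two leading PCs -> two cursor axes
    return mean, components

def body_to_cursor(sample: np.ndarray, mean, components) -> np.ndarray:
    """Project one body-signal sample onto the 2D cursor plane."""
    return components @ (sample - mean)

# Illustrative calibration data: 8 body-sensor channels over 500 samples,
# with most variance deliberately placed in the first two channels.
rng = np.random.default_rng(0)
signals = rng.normal(size=(500, 8)) * np.array([5.0, 3.0, 1, 1, 1, 1, 1, 1])
mean, components = fit_body_to_cursor_map(signals)
cursor = body_to_cursor(signals[0], mean, components)
```

The map modifications the article describes would correspond to choosing or reweighting rows of `components`, e.g., selecting components that engage degrees of freedom the user tends to avoid.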

  12. Status of the Superconducting Insertion Device Control at TLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, K. H.; Wang, C. J.; Lee, Demi

    2007-01-19

    Superconducting insertion devices are installed at the Taiwan Light Source to meet the rapidly growing demand from X-ray users. A control system supports the operation of all these superconducting insertion devices. The control system coordinates the operation of the main power supply and the trimming power supply to charge/discharge the magnet and provides essential interlock protection for the coils and vacuum ducts. Quench protection and various cryogenic interlocks are designed to prevent damage to the magnet. A friendly user interface supports routine operation. Various applications are also developed to aid the operation of these insertion devices. Design considerations and details of the implementation are summarized in this report.

  13. Partitioning of Function in a Distributed Graphics System.

    DTIC Science & Technology

    1985-03-01

    The Virtual Device Interface (VDI) specification is yet another graphics standardization effort, of ANSI committee X31133 [7]. As shown in figure 2-2, the VDI specification could be realized in a real device, or at least a "black box" which the user treats as a hardware device. The device drivers would be written by the manufacturer of the graphics device, instead of the author of the graphics system, since the VDI specification is precisely defined.

  14. A Mobile Food Record For Integrated Dietary Assessment*

    PubMed Central

    Ahmad, Ziad; Kerr, Deborah A.; Bosch, Marc; Boushey, Carol J.; Delp, Edward J.; Khanna, Nitin; Zhu, Fengqing

    2017-01-01

    This paper presents an integrated dietary assessment system based on food image analysis that uses mobile devices or smartphones. We describe two components of our integrated system: a mobile application and an image-based food nutrient database that is connected to the mobile application. An easy-to-use mobile application user interface is described that was designed based on user preferences as well as the requirements of the image analysis methods. The user interface is validated by user feedback collected from several studies. Food nutrient and image databases are also described, which facilitate image-based dietary assessment and enable dietitians and other healthcare professionals to monitor patients' dietary intake in real time. The system has been tested and validated in several user studies involving more than 500 users who took more than 60,000 food images under controlled and community-dwelling conditions. PMID:28691119

  15. Portable Handheld Optical Window Inspection Device

    NASA Technical Reports Server (NTRS)

    Ihlefeld, Curtis; Dokos, Adam; Burns, Bradley

    2010-01-01

    The Portable Handheld Optical Window Inspection Device (PHOWID) is a measurement system for imaging small defects (scratches, pits, micrometeor impacts, and the like) in the field. Designed primarily for window inspection, PHOWID attaches to a smooth surface with suction cups and raster-scans a small area with an optical pen in order to provide a three-dimensional image of the defect. PHOWID consists of a graphical user interface, motor control subsystem, scanning head, and interface electronics, as well as an integrated camera and user display that allow a user to locate minute defects before scanning. Noise levels are on the order of 60 μin. (1.5 μm). PHOWID allows field measurement of defects that would otherwise be done in the lab. It is small, light, and attaches directly to the test article in any orientation up to vertical. An operator can scan a defect and get useful engineering data in a matter of minutes. There is no need to make a mold impression for later lab analysis.

  16. Design and implementation of a portal for the medical equipment market: MEDICOM.

    PubMed

    Palamas, S; Kalivas, D; Panou-Diamandi, O; Zeelenberg, C; van Nimwegen, C

    2001-01-01

    The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. 
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support.

  17. Design and Implementation of a Portal for the Medical Equipment Market: MEDICOM

    PubMed Central

    Kalivas, Dimitris; Panou-Diamandi, Ourania; Zeelenberg, Cees; van Nimwegen, Chris

    2001-01-01

    Background The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. Objective To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). Methods The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. 
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. Results The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. Conclusions The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support. PMID:11772547

  18. Haptic interface of web-based training system for interventional radiology procedures

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Lu, Yiping; Loe, KiaFock; Nowinski, Wieslaw L.

    2004-05-01

    The existing web-based medical training systems and surgical simulators can provide an affordable and accessible medical training curriculum, but they seldom offer the trainee realistic and affordable haptic feedback, and therefore cannot offer a suitable practicing environment. In this paper, a haptic solution for interventional radiology (IR) procedures is proposed. The system architecture of a web-based training system for IR procedures is briefly presented first. Then, the mechanical structure, the working principle, and the application of a haptic device are discussed in detail. The haptic device works as an interface between the training environment and the trainees and is placed at the end-user side. With the system, the user can be trained over the web on interventional radiology procedures - navigating catheters, inflating balloons, deploying coils, and placing stents - while receiving haptic feedback in real time.

  19. Embedded Control System for Smart Walking Assistance Device.

    PubMed

    Bosnak, Matevz; Skrjanc, Igor

    2017-03-01

    This paper presents the design and implementation of a unique control system for a smart hoist, a therapeutic device that is used in the rehabilitation of walking. The control system features a unique human-machine interface that allows the user to intuitively control the system simply by moving or rotating their body. The paper contains an overview of the complete system, including the design and implementation of custom sensors, DC servo motor controllers, communication interfaces, and an embedded-system-based central control system. The prototype of the complete system was tested in a six-run experiment on 11 subjects, and the results show that the proposed control system interface is indeed intuitive and easy for users to adopt.

  20. An Accessible User Interface for Geoscience and Programming

    NASA Astrophysics Data System (ADS)

    Sevre, E. O.; Lee, S.

    2012-12-01

    The goal of this research is to develop an interface that will simplify user interaction with software for scientists. The motivating factor of the research is to develop tools that assist scientists with limited motor skills with the efficient generation and use of software tools. Reliance on computers and programming is increasing in the world of geology, and it is increasingly important for geologists and geophysicists to have the computational resources to use advanced software and edit programs for their research. I have developed a prototype of a program to help geophysicists write programs using a simple interface that requires only simple single-mouse-clicks to input code. It is my goal to minimize the amount of typing necessary to create simple programs and scripts to increase accessibility for people with disabilities limiting fine motor skills. This interface can be adapted for various programming and scripting languages. Using this interface will simplify development of code for C/C++, Java, and GMT, and can be expanded to support any other text based programming language. The interface is designed around the concept of maximizing the amount of code that can be written using a minimum number of clicks and typing. The screen is split into two sections: a list of click-commands is on the left hand side, and a text area is on the right hand side. When the user clicks on a command on the left hand side the applicable code is automatically inserted at the insertion point in the text area. Currently in the C/C++ interface, there are commands for common code segments that are often used, such as for loops, comments, print statements, and structured code creation. The primary goal is to provide an interface that will work across many devices for developing code. A simple prototype has been developed for the iPad. 
Due to the limited number of devices that an iOS application can run on, the code has been re-written in Java to run on a wider range of devices. Currently, the software works as a prototype, and our goal is to develop it further into software that can benefit a wide range of people working in the geosciences, making code development practical and accessible to a broader audience of scientists. An interface like this also reduces the potential for errors by reusing known working code.
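The single-click code insertion described in this record amounts to splicing a template into a text buffer at the caret position. A minimal sketch follows; the command names and C templates are illustrative, not the tool's actual command set:

```python
# Minimal sketch of click-to-insert code editing: each button click
# splices a code template into the buffer at the caret. The templates
# below are illustrative, not the described tool's actual commands.
TEMPLATES = {
    "for": "for (int i = 0; i < n; i++) {\n    \n}\n",
    "print": "printf(\"\\n\");\n",
    "comment": "/*  */\n",
}

class ClickEditor:
    def __init__(self):
        self.text = ""
        self.caret = 0

    def click(self, command: str):
        # One click inserts a whole known-working code fragment,
        # minimizing typing for users with limited fine motor skills.
        snippet = TEMPLATES[command]
        self.text = self.text[:self.caret] + snippet + self.text[self.caret:]
        self.caret += len(snippet)   # caret advances past the inserted code

editor = ClickEditor()
editor.click("comment")
editor.click("for")
```

Because each template is a complete, syntactically valid fragment, reuse of known working code reduces the opportunity for typing errors, as the record notes.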

  1. Physical interface dynamics alter how robotic exosuits augment human movement: implications for optimizing wearable assistive devices.

    PubMed

    Yandell, Matthew B; Quinlivan, Brendan T; Popov, Dmitry; Walsh, Conor; Zelik, Karl E

    2017-05-18

    Wearable assistive devices have demonstrated the potential to improve mobility outcomes for individuals with disabilities, and to augment healthy human performance; however, these benefits depend on how effectively power is transmitted from the device to the human user. Quantifying and understanding this power transmission is challenging due to complex human-device interface dynamics that occur as biological tissues and physical interface materials deform and displace under load, absorbing and returning power. Here we introduce a new methodology for quickly estimating interface power dynamics during movement tasks using common motion capture and force measurements, and then apply this method to quantify how a soft robotic ankle exosuit interacts with and transfers power to the human body during walking. We partition exosuit end-effector power (i.e., power output from the device) into power that augments ankle plantarflexion (termed augmentation power) vs. power that goes into deformation and motion of interface materials and underlying soft tissues (termed interface power). We provide empirical evidence of how human-exosuit interfaces absorb and return energy, reshaping exosuit-to-human power flow and resulting in three key consequences: (i) During exosuit loading (as applied forces increased), about 55% of exosuit end-effector power was absorbed into the interfaces. (ii) However, during subsequent exosuit unloading (as applied forces decreased) most of the absorbed interface power was returned viscoelastically. Consequently, the majority (about 75%) of exosuit end-effector work over each stride contributed to augmenting ankle plantarflexion. (iii) Ankle augmentation power (and work) was delayed relative to exosuit end-effector power, due to these interface energy absorption and return dynamics. 
Our findings elucidate the complexities of human-exosuit interface dynamics during transmission of power from assistive devices to the human body, and provide insight into improving the design and control of wearable robots. We conclude that in order to optimize the performance of wearable assistive devices it is important, throughout design and evaluation phases, to account for human-device interface dynamics that affect power transmission and thus human augmentation benefits.
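The partition described above can be illustrated numerically: end-effector power is applied force times end-effector velocity, augmentation power is the same force times the velocity of the underlying anatomical point, and interface power is the difference. The signals below are invented single-axis samples, not measured data.

```python
def partition_power(force, v_end_effector, v_anatomical, dt):
    """Split end-effector power into augmentation and interface power."""
    p_end = [f * v for f, v in zip(force, v_end_effector)]
    p_aug = [f * v for f, v in zip(force, v_anatomical)]
    p_int = [pe - pa for pe, pa in zip(p_end, p_aug)]
    work = lambda series: sum(series) * dt  # rectangle-rule integral
    return work(p_end), work(p_aug), work(p_int)

w_end, w_aug, w_int = partition_power(
    force=[0, 50, 100, 50, 0],                   # N (illustrative)
    v_end_effector=[0.0, 0.2, 0.3, 0.2, 0.0],    # m/s at the end effector
    v_anatomical=[0.0, 0.1, 0.25, 0.18, 0.0],    # m/s at the anatomical point
    dt=0.01)                                     # s
```

By construction the three work terms satisfy w_end = w_aug + w_int over any interval, which is the bookkeeping identity the paper's methodology rests on.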

  2. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    NASA Astrophysics Data System (ADS)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large- screen display technologies (such as data walls and surround- screen projection systems) the lack of human-centered unencumbered user interfaces and 3D interaction technologies. Such shortcomings present severe limitations to the application of virtual reality (VR) technology to time- critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high grade of flexibility with respect to the system requirements (display and I/O devices) as well as to the ability to seamlessly and intuitively switch between different interaction modalities and interaction are sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  3. Vision based interface system for hands free control of an Intelligent Wheelchair.

    PubMed

    Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi

    2009-08-06

    Due to the shift in the age structure of today's populations, the need to develop devices and technologies that support elderly and disabled people has been increasing. Traditionally, the wheelchair, whether powered or manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it remains highly restrictive, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that is more convenient and efficient for people with disabilities in their limbs. This paper proposes an IW control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information: the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infrared sensors, a PC camera, and a vision system. The vision system analyzes the user's gestures in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using AdaBoost, after which the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to cluttered backgrounds and time-varying illumination. To demonstrate these advantages, the proposed system was tested with 34 users in indoor and outdoor environments, and the results were compared with those of other systems. The proposed system outperformed the others in both speed and accuracy, showing that it provides a friendly and convenient interface for severely disabled people.
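The three-stage pipeline described above (detector → recognizer → converter) can be sketched as follows. A real implementation would detect the face with an AdaBoost cascade and classify mouth shape with K-means; here the stages are stubbed with hand-made features just to show the data flow.

```python
def detector(frame):
    # Would extract features from the camera image; stubbed here.
    return frame["inclination_deg"], frame["mouth_openness"]

def recognizer(inclination_deg, mouth_openness):
    # Face inclination -> steering direction; mouth shape -> go/stop.
    if inclination_deg < -10:
        direction = "left"
    elif inclination_deg > 10:
        direction = "right"
    else:
        direction = "straight"
    motion = "go" if mouth_openness > 0.5 else "stop"
    return direction, motion

def converter(direction, motion):
    # Would issue motor commands to the wheelchair controller.
    return {"steer": direction, "drive": motion == "go"}

command = converter(*recognizer(*detector(
    {"inclination_deg": -15, "mouth_openness": 0.8})))
```

The thresholds (±10 degrees, 0.5 openness) are invented for the sketch; the paper derives its decision boundaries from statistical analysis and clustering of user data.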

  4. Computer Lab Tools for Science: An Analysis of Commercially Available Science Interfacing Software for Microcomputers. A Quarterly Report.

    ERIC Educational Resources Information Center

    Weaver, Dave

    Science interfacing packages (also known as microcomputer-based laboratories or probeware) generally consist of a set of programs on disks, a user's manual, and hardware which includes one or more sensory devices. Together with a microcomputer they combine to make a powerful data acquisition and analysis tool. Packages are available for accurately…

  5. Human Factors Approach to Comparative Usability of Hospital Manual Defibrillators.

    PubMed

    Fidler, Richard; Johnson, Meshell

    2016-04-01

    Equipment-related issues have recently been cited as a significant contributor to the suboptimal outcomes of resuscitation management. A systematic evaluation of the human-device interface was undertaken to evaluate the intuitive nature of three different defibrillators. Devices tested were the Physio-Control LifePak 15, the Zoll R Series Plus, and the Philips MRx. A convenience sample of 73 multidisciplinary health care providers from 5 different hospitals participated in this study. All subjects' performances were evaluated without any training on the devices being studied to assess the intuitiveness of the user interface to perform the functions of delivering an Automated External Defibrillator (AED) shock, a manual defibrillation, pacing to achieve 100% capture, and synchronized cardioversion on a rhythm simulator. Times to deliver an AED shock were fastest with the Zoll, whereas the Philips had the fastest times to deliver a manual defibrillation. Subjects took the least time to attain 100% capture for pacing with the Physio-Control device. No differences in performance times were seen with synchronized cardioversion among the devices. Human factors issues uncovered during this study included a preference for knobs over soft keys and a desire for clarity in control panel design. This study demonstrated no clearly superior defibrillator, as each of the models exhibited strengths in different areas. When asked their defibrillator preference, 67% of subjects chose the Philips. This comparison of user interfaces of defibrillators in simulated situations allows the assessment of usability that can provide manufacturers and educators with feedback about defibrillator implementation for these critical care devices. Published by Elsevier Ireland Ltd.

  6. Communications interface for wireless communications headset

    NASA Technical Reports Server (NTRS)

    Culotta, Jr., Anthony Joseph (Inventor); Seibert, Marc A. (Inventor)

    2004-01-01

    A universal interface adapter circuit interfaces, for example, a wireless communications headset with any type of communications system, including those that require push-to-talk (PTT) signaling. The interface adapter is comprised of several main components, including an RF signaling receiver, a microcontroller and associated circuitry for decoding and processing the received signals, and programmable impedance matching and line interfacing circuitry for interfacing a wireless communications headset system base to a communications system. A signaling transmitter, which is preferably portable (e.g., handheld), is employed by the wireless headset user to send signals to the signaling receiver. In an embodiment of the invention directed specifically to push-to-talk (PTT) signaling, the wireless headset user presses a button on the signaling transmitter when they wish to speak. This sends a signal to the microcontroller which decodes the signal and recognizes the signal as being a PTT request. In response, the microcontroller generates a control signal that closes a switch to complete a voice connection between the headset system base and the communications system so that the user can communicate with the communications system. With this arrangement, the wireless headset can be interfaced to any communications system that requires PTT signaling, without modification of the headset device. In addition, the interface adapter can also be configured to respond to or deliver any other types of signals, such as dual-tone-multiple-frequency (DTMF) tones, and on/off hook signals. The present invention is also scalable, and permits multiple wireless users to operate independently in the same environment through use of a plurality of the interface adapters.
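The PTT control flow described above can be modeled in a few lines: the microcontroller decodes a received code and opens or closes the voice switch accordingly. The signal codes and switch abstraction are invented for this sketch, not taken from the patent.

```python
PTT_PRESS, PTT_RELEASE = 0x01, 0x02  # hypothetical decoded signal codes

class InterfaceAdapter:
    def __init__(self):
        self.voice_switch_closed = False

    def on_signal(self, code):
        if code == PTT_PRESS:
            # Complete the voice path between the headset base and the radio.
            self.voice_switch_closed = True
        elif code == PTT_RELEASE:
            self.voice_switch_closed = False
        # Other signal types (DTMF tones, on/off hook) would dispatch here.

adapter = InterfaceAdapter()
adapter.on_signal(PTT_PRESS)
```

The same dispatch point is where the adapter's configurability comes from: adding a new signal type means adding a branch, not modifying the headset.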

  7. A personal digital assistant application (MobilDent) for dental fieldwork data collection, information management and database handling.

    PubMed

    Forsell, M; Häggström, M; Johansson, O; Sjögren, P

    2008-11-08

    To develop a personal digital assistant (PDA) application for oral health assessment fieldwork, including back-office and database systems (MobilDent). System design, construction and implementation of PDA, back-office and database systems. System requirements for MobilDent were collected, analysed and translated into system functions. User interfaces were implemented and system architecture was outlined. MobilDent was based on a platform with .NET (Microsoft) components, using an SQL Server 2005 (Microsoft) database for data storage and the Windows Mobile (Microsoft) operating system. The PDA devices were Dell Axim. System functions and user interfaces were specified for MobilDent. User interfaces for the PDA, back-office and database systems were based on .NET programming. The PDA user interface was based on Windows, adapted to a PDA display, whereas the back-office interface was designed for a normal-sized computer screen. A synchronisation module (MS ActiveSync, Microsoft) was used to enable download of field data from the PDA to the database. MobilDent is a feasible application for oral health assessment fieldwork, and the oral health assessment database may prove a valuable source for care planning, educational and research purposes. Further development of the MobilDent system will include wireless connectivity with download-on-demand technology.

  8. The Effects of GBL and Learning Styles on Chinese Idiom by Using TUI Device

    ERIC Educational Resources Information Center

    Ku, D. T.; Huang, Y.-H.; Hus, S. C.

    2015-01-01

    This study investigated how the integration of a game-based learning strategy and a tangible user interface (TUI) device improves the learning achievement of fifth-grade students in studying Chinese idioms. By using the sifting and sorting features of Sifteo Cubes, learners, via a gaming situation, manually composed the cubes to the correct…

  9. Virtual workstations and telepresence interfaces: Design accommodations and prototypes for Space Station Freedom evolution

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1990-01-01

    An advanced human-system interface is being developed for evolutionary Space Station Freedom as part of the NASA Office of Space Station (OSS) Advanced Development Program. The human-system interface is based on body-pointed display and control devices. The project will identify and document the design accommodations ('hooks and scars') required to support virtual workstations and telepresence interfaces, and prototype interface systems will be built, evaluated, and refined. The project is a joint enterprise of Marquette University, Astronautics Corporation of America (ACA), and NASA's ARC. The project team is working with NASA's JSC and McDonnell Douglas Astronautics Company (the Work Package contractor) to ensure that the project is consistent with space station user requirements and program constraints. Documentation describing design accommodations and tradeoffs will be provided to OSS, JSC, and McDonnell Douglas, and prototype interface devices will be delivered to ARC and JSC. ACA intends to commercialize derivatives of the interface for use with computer systems developed for scientific visualization and system simulation.

  10. Evaluation of a wireless wearable tongue–computer interface by individuals with high-level spinal cord injuries

    PubMed Central

    Huo, Xueliang; Ghovanloo, Maysam

    2010-01-01

    The tongue drive system (TDS) is an unobtrusive, minimally invasive, wearable and wireless tongue–computer interface (TCI), which can infer its users' intentions, represented in their volitional tongue movements, by detecting the position of a small permanent magnetic tracer attached to the users' tongues. Any specific tongue movements can be translated into user-defined commands and used to access and control various devices in the users' environments. The latest external TDS (eTDS) prototype is built on a wireless headphone and interfaced to a laptop PC and a powered wheelchair. Using customized sensor signal processing algorithms and a graphical user interface, the eTDS performance was evaluated by 13 naive subjects with high-level spinal cord injuries (C2–C5) at the Shepherd Center in Atlanta, GA. Results of the human trial show that an average information transfer rate of 95 bits/min was achieved for computer access with 82% accuracy. This information transfer rate is about two times higher than that of EEG-based BCIs tested on human subjects. It was also demonstrated that the subjects had immediate and full control over the powered wheelchair to the extent that they were able to perform complex wheelchair navigation tasks, such as driving through an obstacle course. PMID:20332552
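Information transfer rates like the 95 bits/min quoted above are commonly computed with Wolpaw's formula from the number of selectable targets, the selection accuracy, and the selection rate. The sketch below uses that standard definition; the example parameter values are illustrative, not the study's.

```python
import math

def bits_per_selection(n_targets, accuracy):
    """Wolpaw bits per selection:
    log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Information transfer rate in bits per minute."""
    return bits_per_selection(n_targets, accuracy) * selections_per_min
```

Note that accuracy and speed trade off inside the formula, which is why an interface with 82% accuracy can still out-perform a slower but more accurate one.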

  11. Development of an imaging method for quantifying a large digital PCR droplet

    NASA Astrophysics Data System (ADS)

    Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang

    2017-02-01

    Portable devices have been recognized as the future linkage between end-users and lab-on-a-chip devices. They offer user-friendly interfaces and provide apps to interface with headphones, cameras, communication channels, etc. In particular, the digital cameras installed in smartphones and pads already offer high imaging resolution with a large number of pixels. This unique feature has triggered research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images of a digital PCR biochip taken with a smartphone in a black box. Each reaction droplet is recorded in pixels and analyzed in the sRGB (red, green, and blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise contributed by the CCD camera and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated into an app and can provide a user-friendly interface without professional training.
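The counting and quantification steps described above can be sketched with the imaging stages reduced to precomputed per-droplet features. The intensity threshold, size filter, and droplet volume below are illustrative values, not the paper's.

```python
import math

def count_positive(droplets, intensity_threshold=120, min_area_px=20):
    """Apply the intensity threshold and the size filter to droplet features."""
    return sum(1 for d in droplets
               if d["mean_green"] > intensity_threshold
               and d["area_px"] >= min_area_px)

def copies_per_ul(n_positive, n_total, droplet_volume_ul):
    """Standard digital-PCR Poisson correction: lambda = -ln(1 - p)."""
    p = n_positive / n_total
    return -math.log(1.0 - p) / droplet_volume_ul

droplets = [{"mean_green": 200, "area_px": 35},  # positive
            {"mean_green": 40,  "area_px": 30},  # negative
            {"mean_green": 210, "area_px": 5}]   # too small: ruled out
```

The Poisson step is what makes the assay "digital": concentration follows from the positive fraction alone, so absolute intensity calibration of the phone camera is unnecessary.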

  12. Brain-computer interface technology: a review of the Second International Meeting.

    PubMed

    Vaughan, Theresa M; Heetderks, William J; Trejo, Leonard J; Rymer, William Z; Weinrich, Michael; Moore, Melody M; Kübler, Andrea; Dobkin, Bruce H; Birbaumer, Niels; Donchin, Emanuel; Wolpaw, Elizabeth Winter; Wolpaw, Jonathan R

    2003-06-01

    This paper summarizes the Brain-Computer Interfaces for Communication and Control, The Second International Meeting, held in Rensselaerville, NY, in June 2002. Sponsored by the National Institutes of Health and organized by the Wadsworth Center of the New York State Department of Health, the meeting addressed current work and future plans in brain-computer interface (BCI) research. Ninety-two researchers representing 38 different research groups from the United States, Canada, Europe, and China participated. The BCIs discussed at the meeting use electroencephalographic activity recorded from the scalp or single-neuron activity recorded within cortex to control cursor movement, select letters or icons, or operate neuroprostheses. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers, the user who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI that recognizes the commands contained in the input and expresses them in device control. Current BCIs have maximum information transfer rates of up to 25 b/min. Achievement of greater speed and accuracy requires improvements in signal acquisition and processing, in translation algorithms, and in user training. These improvements depend on interdisciplinary cooperation among neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on adoption and widespread application of objective criteria for evaluating alternative methods. The practical use of BCI technology will be determined by the development of appropriate applications and identification of appropriate user groups, and will require careful attention to the needs and desires of individual users.
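The central translation algorithm described above can be as simple as a linear map from spectral features to cursor velocity, updated during use so that the user and the algorithm co-adapt. The sketch below is a generic illustration of that idea, not any particular group's method; all weights and the learning rate are invented.

```python
class LinearTranslator:
    """Cursor velocity as a weighted sum of band-power features, with a
    small supervised adaptation step (gradient descent on squared error)."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def translate(self, features):
        return sum(w * f for w, f in zip(self.w, features)) + self.b

    def adapt(self, features, intended_velocity):
        # One calibration step toward the user's intended movement.
        err = self.translate(features) - intended_velocity
        self.w = [w - self.lr * err * f for w, f in zip(self.w, features)]
        self.b -= self.lr * err
        return err

t = LinearTranslator(2)
errors = [abs(t.adapt([1.0, 0.5], 1.0)) for _ in range(50)]
```

Repeating the adaptation step shrinks the translation error, mirroring the paper's point that BCI operation depends on two adaptive controllers converging together.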

  13. Universal programming interface with concurrent access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alferov, Oleg

    2004-10-07

    There exist a number of devices with a positioning nature of operation, such as mechanical linear stages, temperature controllers, or filter wheels with discrete states, and most of them have different programming interfaces. The Universal Positioner software handles all of them with a single approach, whereby a particular hardware driver is created from a template by translating the actual commands used by the hardware to and from the universal programming interface. The software contains the universal API module itself, a demo simulation of hardware, and front-end programs to help developers write their own software drivers, along with example drivers for actual hardware controllers. The software allows user application programs to call devices simultaneously without race conditions (multitasking and concurrent access). The template suggested in this package permits developers to integrate various devices easily into their applications using the same API. The drivers can be stacked; i.e., they can call each other via the same interface.
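The template-plus-translation idea above can be sketched as a base class whose concrete drivers override only the hardware-specific command translation, with a lock serializing concurrent calls. All class, method, and protocol names here are assumptions for illustration, not the package's actual API.

```python
import threading

class PositionerDriver:
    """Universal positioning API template: subclasses supply _send()."""

    def __init__(self):
        self._lock = threading.Lock()
        self._position = 0.0

    def _send(self, command: str):
        raise NotImplementedError  # hardware-specific translation

    def move_to(self, position: float):
        with self._lock:  # serialize concurrent callers (no race conditions)
            self._send("MOVE %g" % position)
            self._position = position

    def get_position(self) -> float:
        with self._lock:
            return self._position

class FilterWheelDriver(PositionerDriver):
    """Concrete driver translating to an invented filter-wheel protocol."""
    def _send(self, command):
        self.last_raw = command.replace("MOVE", "FW:POS")

wheel = FilterWheelDriver()
wheel.move_to(3)
```

Because every driver exposes the same interface, drivers can also wrap one another (stacking), exactly as the record describes.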

  14. DeviceEditor visual biological CAD canvas

    PubMed Central

    2012-01-01

    Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390

  15. Improvement of design of a surgical interface using an eye tracking device

    PubMed Central

    2014-01-01

    Background Surgical interfaces are used to help surgeons interpret and quantify patient information, and to present an integrated workflow in which all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors method, the user-centered design approach, is used here to develop a surgical interface for kidney tumor cryoablation, and an eye tracking device is used to obtain the best configuration of the developed interface. Methods The surgical interface for kidney tumor cryoablation has been developed through the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, which comprise various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments on a simulated tumor cryoablation task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface have been analyzed, and these data used to modify the interface. Results Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time participants required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results showed that the overall mental workload of surgeons related to the surgical interface was low, as intended, and their overall situational awareness scores were considerably high. Conclusions This preliminary study highlights the improvement of a surgical interface using eye tracking technology to obtain the best interface configuration. The results presented here reveal that a visual surgical interface designed according to eye movement characteristics may lead to improved usability. PMID:25080176
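The fixation analysis described above reduces to counting fixations and summing fixation durations inside each informative region. A minimal sketch, modeling a region as an axis-aligned box; the fixation tuples and region coordinates are illustrative.

```python
def aoi_stats(fixations, aoi):
    """Return (fixation count, total fixation duration) inside an
    area of interest. Fixations are (x, y, duration_ms) tuples."""
    x0, y0, x1, y1 = aoi
    durations = [d for (x, y, d) in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1]
    return len(durations), sum(durations)

fixations = [(120, 80, 310), (400, 300, 250), (130, 90, 180)]
count, total_ms = aoi_stats(fixations, aoi=(100, 60, 200, 120))
```

Comparing these two metrics across interface configurations is how the study decided, for example, to reduce the number of displayed CT images.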

  16. Improvement of design of a surgical interface using an eye tracking device.

    PubMed

    Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz

    2014-05-07

    Surgical interfaces are used to help surgeons interpret and quantify patient information, and to present an integrated workflow in which all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors method, the user-centered design approach, is used here to develop a surgical interface for kidney tumor cryoablation, and an eye tracking device is used to obtain the best configuration of the developed interface. The surgical interface for kidney tumor cryoablation has been developed through the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, which comprise various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments on a simulated tumor cryoablation task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface have been analyzed, and these data used to modify the interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time participants required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results showed that the overall mental workload of surgeons related to the surgical interface was low, as intended, and their overall situational awareness scores were considerably high. This preliminary study highlights the improvement of a surgical interface using eye tracking technology to obtain the best interface configuration. The results presented here reveal that a visual surgical interface designed according to eye movement characteristics may lead to improved usability.

  17. A Hardware Platform for Tuning of MEMS Devices Using Closed-Loop Frequency Response

    NASA Technical Reports Server (NTRS)

    Ferguson, Michael I.; MacDonald, Eric; Foor, David

    2005-01-01

    We report on the development of a hardware platform for integrated tuning and closed-loop operation of MEMS gyroscopes. The platform was developed and tested for the second generation JPL/Boeing Post-Resonator MEMS gyroscope. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). A software interface allows the user to configure, calibrate, and tune the bias voltages on the micro-gyro. The interface easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.

  18. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents, and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
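The recursive temporal filtering mentioned above is commonly a first-order recursive (IIR) average, y[n] = a·x[n] + (1−a)·y[n−1], which suppresses quantum and readout noise at the cost of motion lag. A per-pixel sketch; the weight below is illustrative, not the system's actual setting.

```python
def temporal_filter(pixel_series, a=0.25):
    """First-order recursive average over successive frames of one pixel."""
    out, y = [], None
    for x in pixel_series:
        y = x if y is None else a * x + (1 - a) * y
        out.append(y)
    return out

filtered = temporal_filter([0.0, 1.0, 1.0, 1.0])
```

A smaller weight gives stronger noise suppression but a longer settling time after each change, which is why such filters are typically disabled or reset during fast fluoroscopic motion.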

  19. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, Cn; Yadava, G; Patel, V; Bednarek, Dr; Hoffmann, Kr; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents, and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  20. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high-performance thermal imager with integrated geolocation functions is a powerful long-range targeting device. Firefly is a software-defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry-standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has a built-in Global Positioning System (GPS) receiver which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process, which incorporated user feedback at various stages.
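    The geolocation calculation defined in the first sentence can be sketched with the standard spherical destination-point formula. This is a simplification, assuming the measured range can be treated as ground range on a spherical Earth; it is not the Firefly implementation.

```python
import math

def geolocate(lat_deg, lon_deg, bearing_deg, range_m, R=6371000.0):
    """Destination point from observer position (degrees), compass bearing
    (degrees) and range (metres), on a sphere of radius R."""
    phi1 = math.radians(lat_deg)
    lam1 = math.radians(lon_deg)
    theta = math.radians(bearing_deg)
    delta = range_m / R  # angular distance travelled
    phi2 = math.asin(math.sin(phi1) * math.cos(delta) +
                     math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)
```

    For example, heading due north from the equator by about 111.2 km moves the position roughly one degree of latitude.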

  1. Characterizing Graphene-modified Electrodes for Interfacing with Arduino®-based Devices.

    PubMed

    Arris, Farrah Aida; Ithnin, Mohamad Hafiz; Salim, Wan Wardatul Amani Wan

    2016-08-01

    Portable, low-cost sensing platforms for identification and quantitative measurement are in high demand for various environmental monitoring applications, especially in field work. Quantifying parameters in the field requires both minimal sample handling and a device capable of performing measurements with high sensitivity and stability. Furthermore, the one-device-fits-all concept is useful for continuous monitoring of multiple parameters. Miniaturization of devices can be achieved by introducing graphene as part of the transducer in an electrochemical sensor. In this project, we characterize graphene deposition methods on glassy-carbon electrodes (GCEs) with the goal of interfacing with a user-friendly Arduino-based microcontroller. We found that a galvanostatic electrochemical method yields the highest peak current of 10 mA, promising a highly sensitive electrochemical sensor. An Atlas Scientific™ printed circuit board (PCB) was connected to an Arduino® microcontroller using a multi-circuit connection that can be interfaced with graphene-based electrochemical sensors for environmental monitoring.

  2. Challenges in the Implementation of a Mobile Application in Clinical Practice: Case Study in the Context of an Application that Manages the Daily Interventions of Nurses

    PubMed Central

    Wipfli, Rolf; Teodoro, Douglas; Sarrey, Everlyne; Walesa, Magali; Lovis, Christian

    2013-01-01

    Background Working in a clinical environment requires unfettered mobility. This is especially true for nurses who are always on the move providing patients’ care in different locations. Since the introduction of clinical information systems in hospitals, this mobility has often been considered hampered by interactions with computers. The popularity of personal mobile assistants such as smartphones makes it possible to gain easy access to clinical data anywhere. Objective To identify the challenges involved in the deployment of clinical applications on handheld devices and to share our solutions to these problems. Methods A team of experts underwent an iterative development process of a mobile application prototype that aimed to improve the mobility of nurses during their daily clinical activities. Through the process, challenges inherent to mobile platforms have emerged. These issues have been classified, focusing on factors related to ensuring information safety and quality, as well as pleasant and efficient user experiences. 
Results The team identified five main challenges related to the deployment of clinical mobile applications and presents solutions to overcome each of them: (1) Financial: Equipping every caregiver with a new mobile device requires substantial investment, which can be lowered if users use their personal devices instead; (2) Hardware: The constraints inherent to the clinical environment made us choose the mobile device with the best tradeoff between size and portability; (3) Communication: The connection of the mobile application with any existing clinical information system (CIS) is ensured by a bridge formatting the information appropriately; (4) Security: In order to guarantee the confidentiality and safety of the data, the amount of data stored on the device is minimized; and (5) User interface: The design of our user interface relied on homogeneity, hierarchy, and indexicality principles to prevent an increase in data acquisition errors. Conclusions The introduction of nomadic computing often raises enthusiastic reactions from users, but several challenges due to specific constraints of mobile platforms must be overcome. The ease of development of mobile applications and their rapid spread should not overshadow the real challenges of clinical applications and the potential threats to patient safety and the liability of people and organizations using them. For example, careful attention must be given to the overall architecture of the system and to user interfaces. If these precautions are not taken, unexpected failures such as an increased number of input errors, loss of data, or decreased efficiency can easily result. PMID:25100680

  3. Challenges in the Implementation of a Mobile Application in Clinical Practice: Case Study in the Context of an Application that Manages the Daily Interventions of Nurses.

    PubMed

    Ehrler, Frederic; Wipfli, Rolf; Teodoro, Douglas; Sarrey, Everlyne; Walesa, Magali; Lovis, Christian

    2013-06-12

    Working in a clinical environment requires unfettered mobility. This is especially true for nurses who are always on the move providing patients' care in different locations. Since the introduction of clinical information systems in hospitals, this mobility has often been considered hampered by interactions with computers. The popularity of personal mobile assistants such as smartphones makes it possible to gain easy access to clinical data anywhere. To identify the challenges involved in the deployment of clinical applications on handheld devices and to share our solutions to these problems. A team of experts underwent an iterative development process of a mobile application prototype that aimed to improve the mobility of nurses during their daily clinical activities. Through the process, challenges inherent to mobile platforms have emerged. These issues have been classified, focusing on factors related to ensuring information safety and quality, as well as pleasant and efficient user experiences. The team identified five main challenges related to the deployment of clinical mobile applications and presents solutions to overcome each of them: (1) Financial: Equipping every caregiver with a new mobile device requires substantial investment, which can be lowered if users use their personal devices instead; (2) Hardware: The constraints inherent to the clinical environment made us choose the mobile device with the best tradeoff between size and portability; (3) Communication: The connection of the mobile application with any existing clinical information system (CIS) is ensured by a bridge formatting the information appropriately; (4) Security: In order to guarantee the confidentiality and safety of the data, the amount of data stored on the device is minimized; and (5) User interface: The design of our user interface relied on homogeneity, hierarchy, and indexicality principles to prevent an increase in data acquisition errors. 
The introduction of nomadic computing often raises enthusiastic reactions from users, but several challenges due to specific constraints of mobile platforms must be overcome. The ease of development of mobile applications and their rapid spread should not overshadow the real challenges of clinical applications and the potential threats to patient safety and the liability of people and organizations using them. For example, careful attention must be given to the overall architecture of the system and to user interfaces. If these precautions are not taken, unexpected failures such as an increased number of input errors, loss of data, or decreased efficiency can easily result.

  4. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    ERIC Educational Resources Information Center

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  5. Classifying BCI signals from novice users with extreme learning machine

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bermúdez, Germán; Bueno-Crespo, Andrés; José Martinez-Albaladejo, F.

    2017-07-01

    A brain-computer interface (BCI) allows external devices to be controlled using only the electrical activity of the brain. Several approaches have been proposed to improve such systems. However, it is usual to test algorithms with standard BCI signals from expert users or from repositories available on the Internet. In this work, the extreme learning machine (ELM) has been tested with signals from five novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method for classifying electroencephalogram signals from novice users.

  6. Handheld portable real-time tracking and communications device

    DOEpatents

    Wiseman, James M [Albuquerque, NM]; Riblett, Jr., Loren E.; Green, Karl L [Albuquerque, NM]; Hunter, John A [Albuquerque, NM]; Cook, III, Robert N.; Stevens, James R [Arlington, VA]

    2012-05-22

    Portable handheld real-time tracking and communications devices include: a controller module, a communications module with global positioning and mesh-network radio, a data transfer and storage module, and a user interface module, all enclosed in a water-resistant enclosure. Real-time tracking and communications devices can be used by protective-force, security, and first-responder personnel to provide situational awareness, allowing for enhanced coordination and effectiveness in rapid-response situations. Such devices communicate with other authorized devices via mobile ad-hoc wireless networks and do not require fixed infrastructure for their operation.

  7. User-interactive electronic skin for instantaneous pressure visualization

    NASA Astrophysics Data System (ADS)

    Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali

    2013-10-01

    Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components—thin-film transistor, pressure sensor and OLED arrays—are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.

  8. User-interactive electronic skin for instantaneous pressure visualization.

    PubMed

    Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali

    2013-10-01

    Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components--thin-film transistor, pressure sensor and OLED arrays--are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.

  9. The Device Centric Communication System for 5G Networks

    NASA Astrophysics Data System (ADS)

    Biswash, S. K.; Jayakody, D. N. K.

    2017-01-01

    Fifth Generation (5G) communication networks have several functional features such as massive Multiple Input and Multiple Output (MIMO), device-centric data and voice support, and smarter-device communications. The objective for 5G networks is to gain 1000x more throughput, 10x more spectral efficiency, and 100x more energy efficiency than existing technologies. The 5G system will provide a balance between Quality of Experience (QoE) and Quality of Service (QoS) without compromising user benefit. The data rate has been the key metric for wireless QoS; QoE deals with delay and throughput. In order to realize a balance between QoS and QoE, we propose a cellular device-centric communication methodology for the overlapping network coverage area in the 5G communication system. Multiple beacon signals from mobile towers indicate an overlapping network coverage area, from which a user must be forwarded to the next location area. To resolve this issue, we suggest a user-centric methodology (without a Base Station interface) to hand over the device to the next area until the user finalizes the communication. The proposed method will reduce the signalling cost and overheads for the communication.
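    A device-side handover decision of the kind described above can be sketched as a strongest-beacon rule with a hysteresis margin. This is a generic illustration, not the authors' protocol; the margin value and the RSSI representation are assumptions.

```python
def pick_next_cell(beacons, current, hysteresis_db=3.0):
    """Device-centric handover sketch: in an overlap area the device itself
    compares beacon strengths (dBm) and switches only when a neighbour beats
    the serving cell by a hysteresis margin, avoiding ping-pong handovers."""
    best, best_rssi = current, beacons[current]
    for cell, rssi in beacons.items():
        if cell != current and rssi > beacons[current] + hysteresis_db and rssi > best_rssi:
            best, best_rssi = cell, rssi
    return best
```

    The hysteresis term is the usual guard against repeated back-and-forth handovers between two cells of similar strength.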

  10. Development of Web Interfaces for Analysis Codes

    NASA Astrophysics Data System (ADS)

    Emoto, M.; Watanabe, T.; Funaba, H.; Murakami, S.; Nagayama, Y.; Kawahata, K.

    Several codes have been developed to analyze plasma physics. However, most of them were developed to run on supercomputers; therefore, users who typically use personal computers (PCs) find it difficult to use these codes. In order to facilitate their widespread use, a user-friendly interface is required. The authors propose Web interfaces for these codes. To demonstrate the usefulness of this approach, the authors developed Web interfaces for two analysis codes. The first is for FIT, developed by Murakami, which is used to analyze NBI heat deposition, etc. Because it requires electron density profiles, electron temperatures, and ion temperatures as polynomial expressions, those unfamiliar with the experiments, especially visitors from other institutes, find it difficult to use this code. The second is for visualizing the lines of force in the LHD (Large Helical Device), developed by Watanabe. This code is used to analyze the interference caused by the lines of force resulting from the various structures installed in the vacuum vessel of the LHD. This code runs on PCs; however, it requires that the necessary parameters be edited manually. Using these Web interfaces, users can execute these codes interactively.
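    The polynomial-profile input that FIT expects can be illustrated with a hypothetical parabolic density profile; the coefficients and grid below are invented for illustration and are not taken from the actual code.

```python
import numpy as np

# Hypothetical electron-density profile n_e(rho) = 1 - rho^2 (normalized),
# entered as polynomial coefficients (highest power first) and evaluated
# on a normalized radial grid, the style of input a Web form could wrap.
coeffs = [-1.0, 0.0, 1.0]
rho = np.linspace(0.0, 1.0, 5)
ne = np.polyval(coeffs, rho)  # 1.0 on the axis, falling to 0.0 at the edge
```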

  11. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    NASA Astrophysics Data System (ADS)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-fold speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the conventional best result, this still provides accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
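    A minimal sketch of motion-sensor fusion, assuming a one-axis complementary filter that blends gyroscope integration with an accelerometer tilt estimate. The paper's actual fusion scheme and parameters are not specified here; the blend factor is an assumption.

```python
def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis accelerometer/gyroscope fusion: integrate the gyro rate for
    fast response, and blend in a small fraction of the accelerometer tilt
    estimate each step to cancel the gyro's slow drift."""
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

    With the gyro at rest, repeated updates converge on the accelerometer's tilt estimate, which is the drift-cancelling behaviour such filters are used for.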

  12. ECCE Toolkit: Prototyping Sensor-Based Interaction.

    PubMed

    Bellucci, Andrea; Aedo, Ignacio; Díaz, Paloma

    2017-02-23

    Building and exploring physical user interfaces requires high technical skills and hours of specialized work. The behavior of multiple devices with heterogeneous input/output channels and connectivity has to be programmed in a context where not only the software interface matters, but also the hardware components are critical (e.g., sensors and actuators). Prototyping physical interaction is hindered by the challenges of: (1) programming interactions among physical sensors/actuators and digital interfaces; (2) implementing functionality for different platforms in different programming languages; and (3) building custom electronic-incorporated objects. We present ECCE (Entities, Components, Couplings and Ecosystems), a toolkit for non-programmers that copes with these issues by abstracting from low-level implementations, thus lowering the complexity of prototyping small-scale, sensor-based physical interfaces to support the design process. A user evaluation provides insights and use cases of the kind of applications that can be developed with the toolkit.

  13. An adaptive brain actuated system for augmenting rehabilitation

    PubMed Central

    Roset, Scott A.; Gant, Katie; Prasad, Abhishek; Sanchez, Justin C.

    2014-01-01

    For people living with paralysis, restoration of hand function remains the top priority because it leads to independence and improvement in quality of life. In approaches to restore hand and arm function, a goal is to better engage voluntary control and counteract the maladaptive brain reorganization that results from non-use. Standard rehabilitation augmented with developments from the study of brain-computer interfaces could provide a combined therapy approach for motor cortex rehabilitation and the alleviation of motor impairments. In this paper, an adaptive brain-computer interface system intended to control a functional electrical stimulation (FES) device is developed as an experimental test bed for augmenting rehabilitation with a brain-computer interface. The system's performance is improved throughout rehabilitation by passive user feedback and reinforcement learning. By continuously adapting to the user's brain activity, similar adaptive systems could be used to support clinical brain-computer interface neurorehabilitation over multiple days. PMID:25565945

  14. ECCE Toolkit: Prototyping Sensor-Based Interaction

    PubMed Central

    Bellucci, Andrea; Aedo, Ignacio; Díaz, Paloma

    2017-01-01

    Building and exploring physical user interfaces requires high technical skills and hours of specialized work. The behavior of multiple devices with heterogeneous input/output channels and connectivity has to be programmed in a context where not only the software interface matters, but also the hardware components are critical (e.g., sensors and actuators). Prototyping physical interaction is hindered by the challenges of: (1) programming interactions among physical sensors/actuators and digital interfaces; (2) implementing functionality for different platforms in different programming languages; and (3) building custom electronic-incorporated objects. We present ECCE (Entities, Components, Couplings and Ecosystems), a toolkit for non-programmers that copes with these issues by abstracting from low-level implementations, thus lowering the complexity of prototyping small-scale, sensor-based physical interfaces to support the design process. A user evaluation provides insights and use cases of the kind of applications that can be developed with the toolkit. PMID:28241502

  15. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.

    PubMed

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti

    2016-10-04

    Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Authors used heterogeneous instruments to evaluate gesture-based interfaces, and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. 
Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence.

  16. 10 CFR Appendix B to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Freezers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... factor. 1.2 “Anti-sweat heater” means a device incorporated into the design of a freezer to prevent the accumulation of moisture on exterior or interior surfaces of the cabinet. 1.3 “Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat...

  17. Visual Analytics for Law Enforcement: Deploying a Service-Oriented Analytic Framework for Web-based Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.

    2009-04-14

    This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.

  18. Simulation of a sensor array for multiparameter measurements at the prosthetic limb interface

    NASA Astrophysics Data System (ADS)

    Rowe, Gabriel I.; Mamishev, Alexander V.

    2004-07-01

    Sensitive skin is a highly desired technology for biomechanical devices, wearable computing, human-computer interfaces, exoskeletons, and, most pertinent to this paper, lower-limb prosthetics. The measurement of shear stress is very important because shear effects are key factors in the development of surface abrasions and pressure sores in paraplegics and users of prosthetic/orthotic devices. A single element of a sensitive skin is simulated and characterized in this paper. Conventional tactile sensors are designed for measurement of normal stress only, which is inadequate for comprehensive assessment of surface contact conditions. The sensitive skin discussed here is a flexible array capable of sensing shear and normal forces, as well as humidity and temperature, on each element.

  19. Wearable ear EEG for brain interfacing

    NASA Astrophysics Data System (ADS)

    Schroeder, Eric D.; Walker, Nicholas; Danko, Amanda S.

    2017-02-01

    Brain-computer interfaces (BCIs) measuring electrical activity via electroencephalogram (EEG) have evolved beyond clinical applications to become wireless consumer products. Typically marketed for meditation and neurotherapy, these devices are limited in scope and currently too obtrusive to be a ubiquitous wearable. Stemming from recent advancements in hearing aid technology, wearables have been shrinking to the point that the necessary sensors, circuitry, and batteries can fit into a small in-ear wearable device. In this work, an ear-EEG device is created with a novel system for artifact removal and signal interpretation. The small, compact, cost-effective, and discreet device is evaluated against existing consumer electronics in this space for its signal quality, comfort, and usability. A custom mobile application is developed to process raw EEG from each device and display interpreted data to the user. Artifact removal and signal classification are accomplished via a combination of support matrix machines (SMMs) and soft thresholding of relevant statistical properties.
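    Soft thresholding, one of the two classification ingredients named above, has a standard definition that can be sketched in one line. This is the generic operator, not the authors' specific pipeline; which statistics it is applied to, and the threshold value, are left open.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink values toward zero by t: entries with |x| <= t are zeroed,
    which suppresses low-amplitude noise in EEG-like signals."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```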

  20. An intensive insulinotherapy mobile phone application built on artificial intelligence techniques.

    PubMed

    Curran, Kevin; Nichols, Eric; Xie, Ermai; Harper, Roy

    2010-01-01

    Software to help control diabetes is currently an embryonic market, with the main activity to date focused on the development of noncomputerized solutions, such as cardboard calculators, or computerized solutions that use "flat" computer models applied to each person without taking into account their individual lifestyles. The development of true, mobile device-driven health applications has been hindered by the lack of tools available in the past and the sheer lack of mobile devices on the market. This has now changed, however, with the availability of pocket personal computer handsets. This article describes a solution in the form of an intelligent neural network running on mobile devices, allowing people with diabetes access to it regardless of their location. Through an easy-to-learn multipanel graphical user interface, people with diabetes can run the software in real time. The neural network consists of four neurons. The first is glucose. If the user's current glucose level is within the target range, the glucose weight is multiplied by zero. If the glucose level is high, the weight is multiplied by a positive value, resulting in a positive amount of insulin to be injected. If the user's glucose level is low, the weights are multiplied by a negative value, resulting in a decrease in the overall insulin dose. A minifeasibility trial was carried out at a local hospital under a consultant endocrinologist in Belfast. The short study ran for 2 weeks with six patients. The main objectives were to investigate the user interface, test the remote sending of data over a 3G network to a centralized server at the university, and record patient data for further proofing of the neural network. We also received useful feedback regarding the user interface and the feasibility of handing real-world patients a new mobile phone. 
Results of this short trial largely confirmed that our approach (also known as intensive insulinotherapy) has value, and perhaps that our neural network approach has implications for future intelligent insulin pumps. Currently, there is no software available to tell people with diabetes how much insulin to inject in accordance with their lifestyle and the individual inputs that lead to adjustments in software predictions of the amount of insulin to inject. We have taken initial steps to supplement the knowledge and skills of health care professionals in controlling insulin levels on a daily basis using a mobile device, for people who are less able to manage their disease, especially children and young adults. 2010 Diabetes Technology Society.
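    The glucose "neuron" described in this abstract (weight times zero in range, a positive contribution when high, a negative one when low) can be sketched as a simple rule. The thresholds, units (mmol/L), and weight below are illustrative assumptions, not the trial's tuned parameters, and this is not medical dosing advice.

```python
def glucose_dose_term(glucose, low=4.0, high=7.0, weight=0.5):
    """One 'neuron' of the dose calculation: the glucose reading scales a
    weight that is zero in the target range, positive above it (extra
    insulin) and negative below it (dose reduction)."""
    if glucose > high:
        return weight * (glucose - high)  # positive correction for a high
    if glucose < low:
        return weight * (glucose - low)   # negative correction for a low
    return 0.0
```

    In the full network, terms like this from each neuron would be summed into the overall recommended dose adjustment.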

  1. An Intensive Insulinotherapy Mobile Phone Application Built on Artificial Intelligence Techniques

    PubMed Central

    Curran, Kevin; Nichols, Eric; Xie, Ermai; Harper, Roy

    2010-01-01

    Background Software to help control diabetes is currently an embryonic market with the main activity to date focused mainly on the development of noncomputerized solutions, such as cardboard calculators or computerized solutions that use “flat” computer models, which are applied to each person without taking into account their individual lifestyles. The development of true, mobile device-driven health applications has been hindered by the lack of tools available in the past and the sheer lack of mobile devices on the market. This has now changed, however, with the availability of pocket personal computer handsets. Method This article describes a solution in the form of an intelligent neural network running on mobile devices, allowing people with diabetes access to it regardless of their location. Utilizing an easy to learn and use multipanel user interface, people with diabetes can run the software in real time via an easy to use graphical user interface. The neural network consists of four neurons. The first is glucose. If the user's current glucose level is within the target range, the glucose weight is then multiplied by zero. If the glucose level is high, then there will be a positive value multiplied to the weight, resulting in a positive amount of insulin to be injected. If the user's glucose level is low, then the weights will be multiplied by a negative value, resulting in a decrease in the overall insulin dose. Results A minifeasibility trial was carried out at a local hospital under a consultant endocrinologist in Belfast. The short study ran for 2 weeks with six patients. The main objectives were to investigate the user interface, test the remote sending of data over a 3G network to a centralized server at the university, and record patient data for further proofing of the neural network. We also received useful feedback regarding the user interface and the feasibility of handing real-world patients a new mobile phone. 
Results of this short trial confirmed to a large degree that our approach (also known as intensive insulin therapy) has value, and perhaps that our neural network approach has implications for future intelligent insulin pumps. Conclusions Currently, there is no software available to tell people with diabetes how much insulin to inject in accordance with their lifestyle and individual inputs, which would in turn adjust software predictions of the amount of insulin to inject. We have taken initial steps to supplement the knowledge and skills of health care professionals in controlling insulin levels on a daily basis using a mobile device, for people who are less able to manage their disease, especially children and young adults. PMID:20167186
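The single-neuron weighting scheme the abstract describes can be sketched in a few lines of Python. The threshold and weight values below are illustrative assumptions, not the trial's parameters, and a real dosing system would never rely on such a simplification:

```python
def dose_adjustment(glucose, low=4.0, high=7.0, weight=0.5):
    """Sketch of the 'glucose neuron' described above.

    In range  -> the weight is multiplied by zero (no adjustment).
    High      -> the weight is multiplied by a positive value.
    Low       -> the weight is multiplied by a negative value.
    All numeric values here are illustrative assumptions.
    """
    if glucose > high:
        signal = glucose - high      # positive -> more insulin
    elif glucose < low:
        signal = glucose - low       # negative -> less insulin
    else:
        signal = 0.0                 # target range -> weight * 0
    return weight * signal

# e.g. dose_adjustment(9.0) is positive, dose_adjustment(3.0) negative
```

The full network described in the article would sum several such weighted inputs (glucose being only the first neuron) before producing a dose suggestion.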

  2. Culture, Interface Design, and Design Methods for Mobile Devices

    NASA Astrophysics Data System (ADS)

    Lee, Kun-Pyo

Aesthetic differences and similarities among cultures are obviously one of the most important issues in cultural design. However, ever since products became knowledge-supporting tools, the visible elements of products have become more universal, so that the invisible parts of products, such as interface and interaction, are becoming more important. Therefore, cultural design should be extended beyond material and phenomenal culture to the invisible elements of culture, such as people's conceptual models. This chapter aims to explain how we address invisible cultural elements in interface design and design methods by exploring users' cognitive styles and communication patterns in different cultures. Regarding cultural interface design, we examined users' conceptual models while interacting with mobile phone and website interfaces, and observed cultural differences in task performance and viewing patterns, which appeared to agree with the cultural cognitive styles known as holistic vs. analytic thought. Regarding design methods for culture, we explored how to localize design methods such as the focus group interview and the generative session for specific cultural groups; the results of comparative experiments revealed cultural differences in participants' behaviors and performance in each design method and led us to suggest how to conduct them in East Asian cultures. Mobile Observation Analyzer and Wi-Pro, user research tools we invented to capture user behaviors and needs, especially in mobile contexts, are also introduced.

  3. Fabryq: Using Phones as Smart Proxies to Control Wearable Devices from the Web

    DTIC Science & Technology

    2014-06-12

    energy efficient, embedded low power device with a short range radio; 2) a user’s mobile phone, which shows a user interface but also acts as a router...ically relays information to a companion application running on the user’s mobile phone (or PC), which in turn communi- cates with servers that the...skills in several diverse fields. Thus, experimentation in deploy- able, mobile wearable devices is largely reserved to experts, and implementation cycles

  4. Preclinical tests of an android based dietary logging application.

    PubMed

    Kósa, István; Vassányi, István; Pintér, Balázs; Nemes, Márta; Kámánné, Krisztina; Kohut, László

    2014-01-01

The paper describes the first, preclinical evaluation of a dietary logging application developed at the University of Pannonia, Hungary. The mobile user interface is briefly introduced. The three evaluation phases examined the completeness and contents of the dietary database and the time expenditure of the mobile-based diet logging procedure. The results show that although there are substantial individual differences between various dietary databases, the expected difference with respect to nutrient contents is below 10% on a typical institutional menu list. Another important finding is that the time needed to record the meals can be reduced to about 3 minutes daily, especially if the user uses set-based search. A well-designed user interface on a mobile device is a viable and reliable way to deliver a personalized lifestyle support service.

  5. Practical Issues of Wireless Mobile Devices Usage with Downlink Optimization

    NASA Astrophysics Data System (ADS)

    Krejcar, Ondrej; Janckulik, Dalibor; Motalova, Leona

Mobile device makers produce tens of new complex mobile devices per year, aiming to give users a single device that can do anything, anywhere, anytime. These devices can run full-scale applications with nearly the same comfort as their desktop equivalents, with only a few limitations. One such limitation is insufficient download speed over wireless connectivity in the case of large multimedia files. The main focus of this paper is a description of possible solutions to this problem, together with tests of several new mobile devices, server interface tests, and descriptions of common software. New devices offer a full range of wireless connectivity, which can be used for more than communication with the outside world; several such possibilities are described. Mobile users will also have an always-on connection to the Internet. Internet use is mainly web pages, but the use of web services is still accelerating. The paper also examines the maximum number of users that can connect simultaneously to a given server type. Finally, LINQ, a new kind of database access technology, is compared with ADO.NET in terms of response time.

  6. Design and implementation of a status at a glance user interface for a power distribution expert system

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Manner, David B.; Dolce, James L.; Mellor, Pamela A.

    1993-01-01

Expert systems are widely used in health monitoring and fault detection applications. One of the key features of an expert system is that it possesses a large body of knowledge about the application for which it was designed. When the user consults this knowledge base, it is essential that the expert system's reasoning process and its conclusions be as concise as possible. If, in addition, an expert system is part of a process monitoring system, the expert system's conclusions must be combined with current events of the process. Under these circumstances, it is difficult for a user to absorb and respond to all the available information. For example, a user can become distracted and confused if two or more unrelated devices in different parts of the system require attention. A human interface designed to integrate expert system diagnoses with process data and to focus the user's attention on the important matters provides a solution to the 'information overload' problem. This paper will discuss a user interface to the power distribution expert system for Space Station Freedom. The importance of features which simplify assessing system status and which minimize navigating through layers of information will be discussed. Design rationale and implementation choices will also be presented.

  7. A pen-based system to support pre-operative data collection within an anaesthesia department.

    PubMed Central

    Sanz, M. F.; Gómez, E. J.; Trueba, I.; Cano, P.; Arredondo, M. T.; del Pozo, F.

    1993-01-01

This paper describes the design and implementation of a pen-based computer system for remote preoperative data collection. The system is envisaged to be used by anaesthesia staff in the different hospital settings where pre-operative data are generated. Pen-based technology offers important advantages in terms of portability and human-computer interaction, such as direct-manipulation interfaces through direct pointing and "notebook user interface" metaphors. Since human factors analysis and user interface design are vital stages for achieving appropriate user acceptability, a methodology that integrates "usability" evaluation from the earliest development stages was used. Additionally, the selection of a pen-based computer system as a portable device to be used by health care personnel allows evaluation of the appropriateness of this new technology for remote data collection within the hospital environment. The work presented is currently being realised under the Research Project "TANIT: Telematics in Anaesthesia and Intensive Care", within the "A.I.M.--Telematics in Health Care" European Research Program. PMID:8130488

  8. An Investigation of the Usability of the Stylus Pen for Various Age Groups on Personal Digital Assistants

    ERIC Educational Resources Information Center

    Ren, Xiangshi; Zhou, Xiaolei

    2011-01-01

    Many handheld devices with stylus pens are available in the market; however, there have been few studies which examine the effects of the size of the stylus pen on user performance and subjective preferences for handheld device interfaces for various age groups. Two experiments (pen-length experiment and pen-tip width/pen-width experiment) were…

  9. The application of autostereoscopic display in smart home system based on mobile devices

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Ling, Zhi

    2015-03-01

A smart home is a system to control home devices, and such systems are becoming more and more popular in daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. Meanwhile, 3D stereo display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as the control interface is proposed to improve the user-friendliness of the experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesizing method based on parallel processing with the Graphics Processing Unit (GPU), implemented with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared to a traditional smart home system, the proposed system applies an autostereoscopic display to the smart home control interface, enhancing the realism, user-friendliness, and visual comfort of the interface.

  10. Using Consumer Electronics and Apps in Industrial Environments - Development of a Framework for Dynamic Feature Deployment and Extension by Using Apps on Field Devices

    NASA Astrophysics Data System (ADS)

    Schmitt, Mathias

    2014-12-01

The aim of this paper is to give a preliminary insight into current work in the field of mobile interaction in industrial environments using established interaction technologies and metaphors from the consumer goods industry. The major objective is the development and implementation of a holistic app framework, which enables dynamic feature deployment and extension by using mobile apps on industrial field devices. As a result, field device functionalities can be updated and adapted effectively in accordance with well-known app concepts from consumer electronics, complying with the urgent requirements of more flexible and changeable factory systems of the future. In addition, much more user-friendly and usable interaction with field devices can be realized. Proprietary software solutions and device-stationary user interfaces can be overcome and replaced by uniform, cross-vendor solutions.

  11. Body machine interfaces for neuromotor rehabilitation: a case study.

    PubMed

    Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Chen, David; Mussa-Ivaldi, Ferdinando A; Casadio, Maura

    2014-01-01

High-level spinal cord injury (SCI) survivors face two related problems every day: recovering motor skills and regaining functional independence. Body machine interfaces (BoMIs) empower people with severe motor disabilities with the ability to control an external device, but they also offer the opportunity to focus concurrently on achieving rehabilitative goals. In this study we developed a portable and low-cost BoMI that addresses both problems. The BoMI remaps the user's residual upper body mobility to the two coordinates of a cursor on a computer monitor. By controlling the cursor, the user can perform functional tasks, such as entering text and playing games. This framework also allows the mapping between the body and the cursor space to be modified, gradually challenging the user to exercise more impaired movements. With this approach, we were able to change the behavior of our SCI subject, who initially used almost exclusively his less impaired degrees of freedom - on the left side - for controlling the BoMI. At the end of a few practice sessions he had restored symmetry between the left and right sides of the body, with an increase of mobility and strength in all the degrees of freedom involved in the control of the interface. This is the first proof of concept that our BoMI can be used to control assistive devices and reach specific rehabilitative goals simultaneously.
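The remapping described above is, at its core, a linear map from residual body-motion signals to two cursor coordinates; retuning the map is what gradually challenges more impaired movements. A minimal sketch, with a made-up 2xN mapping matrix (the actual BoMI's sensors and weights are not specified here):

```python
def body_to_cursor(signals, mapping):
    """Map N residual upper-body motion signals to (x, y) cursor
    coordinates via a 2xN matrix. Changing the matrix shifts which
    body movements drive the cursor (all values are illustrative)."""
    x = sum(w * s for w, s in zip(mapping[0], signals))
    y = sum(w * s for w, s in zip(mapping[1], signals))
    return x, y

# Example: four body signals; the cursor is driven mostly by the
# first two, with small contributions from the weaker movements.
mapping = [[1.0, 0.0, 0.2, 0.0],
           [0.0, 1.0, 0.0, 0.2]]
```

To shift rehabilitative load onto the more impaired side, a therapist would increase the weights on those signals so the user must engage them to move the cursor.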

  12. Vision based interface system for hands free control of an intelligent wheelchair

    PubMed Central

    Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi

    2009-01-01

Background Due to the shift in the age structure of today's populations, the need to develop devices and technologies to support the elderly has been increasing. Traditionally, the wheelchair, both powered and manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it remains highly restrictive, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities in their limbs. Methods This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information, where the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infrared sensors, a PC camera, and a vision system. The vision system analyzes the user's gestures in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost, after which the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. Result & conclusion The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to cluttered backgrounds and time-varying illumination. 
To prove these advantages, the proposed system was tested with 34 users in indoor and outdoor environments, and the results were compared with those of other systems. The results showed that the proposed system has superior performance in terms of speed and accuracy. Therefore, the proposed system provides a friendly and convenient interface for severely disabled people. PMID:19660132
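The converter stage described in the abstract reduces to a small decision rule: mouth shape gates motion, face inclination steers. A sketch of that rule follows; the +/-15 degree threshold and the meaning assigned to a closed mouth are illustrative assumptions, not the paper's calibrated values:

```python
def to_command(face_inclination_deg, mouth_open):
    """Convert the recognizer's outputs (face inclination, mouth
    shape) into a wheelchair command. Threshold values and the
    open/closed convention are illustrative assumptions."""
    if not mouth_open:
        return "stop"                 # mouth shape gates motion
    if face_inclination_deg > 15:
        return "right"                # head tilted one way -> turn
    if face_inclination_deg < -15:
        return "left"
    return "forward"                  # mouth open, head level
```

In the full system, the inputs to this function would come from the Adaboost face detector and the K-means mouth-shape classifier described above.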

  13. 10 CFR Appendix A1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... less for the freezing and storage of ice. 1.3“Anti-sweat heater” means a device incorporated into the... interior surfaces of the cabinet. 1.4“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5“Automatic defrost” means a...

  14. 10 CFR Appendix A to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... capacity (14.2 liters) or less for the freezing and storage of ice. 1.3“Anti-sweat heater” means a device... on the exterior or interior surfaces of the cabinet. 1.4“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5...

  15. 10 CFR Appendix A to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... capacity (14.2 liters) or less for the freezing and storage of ice. 1.3 “Anti-sweat heater” means a device... on the exterior or interior surfaces of the cabinet. 1.4 “Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5...

  16. 10 CFR Appendix B1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Freezers

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... defined in HRF-1-1979 in cubic feet, times (2) an adjustment factor. 1.2“Anti-sweat heater” means a device... surfaces of the cabinet. 1.3“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.4“Automatic Defrost” means a system in...

  17. 10 CFR Appendix B1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Freezers

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... defined in HRF-1-1979 in cubic feet, times (2) an adjustment factor. 1.2“Anti-sweat heater” means a device... surfaces of the cabinet. 1.3“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.4“Automatic Defrost” means a system in...

  18. 10 CFR Appendix A1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... less for the freezing and storage of ice. 1.3 “Anti-sweat heater” means a device incorporated into the... interior surfaces of the cabinet. 1.4 “Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5 “Automatic defrost” means a...

  19. 10 CFR Appendix A to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... capacity (14.2 liters) or less for the freezing and storage of ice. 1.3“Anti-sweat heater” means a device... on the exterior or interior surfaces of the cabinet. 1.4“Anti-sweat heater switch” means a user-controllable switch or user interface which modifies the activation or control of anti-sweat heaters. 1.5...

  20. Neuromuscular interfacing: establishing an EMG-driven model for the human elbow joint.

    PubMed

    Pau, James W L; Xie, Shane S Q; Pullan, Andrew J

    2012-09-01

    Assistive devices aim to mitigate the effects of physical disability by aiding users to move their limbs or by rehabilitating through therapy. These devices are commonly embodied by robotic or exoskeletal systems that are still in development and use the electromyographic (EMG) signal to determine user intent. Not much focus has been placed on developing a neuromuscular interface (NI) that solely relies on the EMG signal, and does not require modifications to the end user's state to enhance the signal (such as adding weights). This paper presents the development of a flexible, physiological model for the elbow joint that is leading toward the implementation of an NI, which predicts joint motion from EMG signals for both able-bodied and less-abled users. The approach uses musculotendon models to determine muscle contraction forces, a proposed musculoskeletal model to determine total joint torque, and a kinematic model to determine joint rotational kinematics. After a sensitivity analysis and tuning using genetic algorithms, subject trials yielded an average root-mean-square error of 6.53° and 22.4° for a single cycle and random cycles of movement of the elbow joint, respectively. This helps us to validate the elbow model and paves the way toward the development of an NI.
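The model chain the abstract describes (EMG activation to musculotendon force, force to joint torque via the musculoskeletal model, torque to rotational kinematics) can be sketched as one integration step. All parameter values below are illustrative assumptions; the paper's actual musculotendon models are far richer than this linear stand-in:

```python
def elbow_step(activations, moment_arms, max_forces, theta, omega,
               inertia=0.05, damping=0.1, dt=0.01):
    """One Euler step of a simplified EMG-driven elbow model.
    activations: normalized EMG levels in [0, 1] per muscle.
    moment_arms: signed moment arms (flexors positive), meters.
    All numeric values are illustrative assumptions."""
    # 1) Musculotendon forces from normalized EMG activations
    forces = [a * f for a, f in zip(activations, max_forces)]
    # 2) Net joint torque: force times moment arm, summed over muscles
    torque = sum(f * r for f, r in zip(forces, moment_arms))
    # 3) Rotational kinematics of the joint
    alpha = (torque - damping * omega) / inertia
    omega += alpha * dt
    theta += omega * dt
    return theta, omega
```

A real implementation would use Hill-type musculotendon dynamics and the tuned parameters the paper obtains via genetic algorithms; this skeleton only shows how the three sub-models feed one another.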

  1. Using SDI-12 with ST microelectronics MCU's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saari, Alexandra; Hinzey, Shawn Adrian; Frigo, Janette Rose

    2015-09-03

ST Microelectronics microcontrollers and processors are readily available, capable, and economical. Unfortunately, they lack a broad user base like similar offerings from Texas Instruments, Atmel, or Microchip. All of these devices could be useful in economical remote-sensing applications for environmental sensing. With the increased need for environmental studies, and limited budgets, flexibility in hardware is very important. To that end, and in an effort to increase open support for ST devices, I am sharing my team's experience in interfacing a common environmental sensor communication protocol (SDI-12) with ST devices.
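For context on the protocol being interfaced: SDI-12 commands end in "!", and the reply to a start-measurement command ("aM!") has the fixed form "atttn" (sensor address, seconds until data are ready, number of values), terminated by CR/LF. A host-side parser for that reply might look like the following sketch (error handling omitted; the MCU-side UART framing, 1200 baud 7E1, is outside this snippet):

```python
def parse_measure_response(resp: str):
    """Parse an SDI-12 'aM!' reply of the form 'atttn<CR><LF>',
    e.g. '00023' -> sensor address '0', data ready in 2 s, 3 values.
    A minimal sketch; real code should validate length and digits."""
    resp = resp.strip()              # drop the trailing CR/LF
    address = resp[0]                # single-character sensor address
    seconds = int(resp[1:4])         # 'ttt': seconds until ready
    n_values = int(resp[4:])         # 'n': number of measured values
    return address, seconds, n_values
```

After waiting the indicated number of seconds, the host would issue a send-data command ("aD0!") to retrieve the values themselves.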

  2. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  3. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  4. Design and implementation of a seamless and comprehensive integrated medical device interface system for outpatient electronic medical records in a general hospital.

    PubMed

    Choi, Jong Soo; Lee, Jean Hyoung; Park, Jong Hwan; Nam, Han Seung; Kwon, Hyuknam; Kim, Dongsoo; Park, Seung Woo

    2011-04-01

    Implementing an efficient Electronic Medical Record (EMR) system is regarded as one of the key strategies for improving the quality of healthcare services. However, the system's interoperability between medical devices and the EMR is a big barrier to deploying the EMR system in an outpatient clinical setting. The purpose of this study is to design a framework for a seamless and comprehensively integrated medical device interface system, and to develop and implement a system for accelerating the deployment of the EMR system. We designed and developed a framework that could transform data from medical devices into the relevant standards and then store them in the EMR. The framework is composed of 5 interfacing methods according to the types of medical devices utilized at an outpatient clinical setting, registered in Samsung Medical Center (SMC) database. The medical devices used for this study were devices that have microchips embedded or that came packaged with personal computers. The devices are completely integrated with the EMR based on SMC's long term IT strategies. First deployment of integrating 352 medical devices into the EMR took place in April, 2006, and it took about 48 months. By March, 2010, every medical device was interfaced with the EMR. About 66,000 medical examinations per month were performed taking up an average of 50GB of storage space. We surveyed users, mainly the technicians. Out of 73 that responded, 76% of the respondents replied that they were strongly satisfied or satisfied, 20% replied as being neutral and only 4% complained about the speed of the system, which was attributed to the slow speed of the old-fashioned medical devices and computers. The current implementation of the medical device interface system based on the SMC framework significantly streamlines the clinical workflow in a satisfactory manner. 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Integrated Computer Controlled Glow Discharge Tube

    NASA Astrophysics Data System (ADS)

    Kaiser, Erik; Post-Zwicker, Andrew

    2002-11-01

An "Interactive Plasma Display" was created for the Princeton Plasma Physics Laboratory to demonstrate the characteristics of plasma to various science education outreach programs. From high school students and teachers to undergraduate students and visitors to the lab, the plasma device will be a key component in advancing the public's basic knowledge of plasma physics. The device is fully computer controlled using LabVIEW, a touchscreen Graphical User Interface [GUI], and a GPIB interface. Utilizing a feedback loop, the display is fully autonomous in controlling pressure, as well as in monitoring the safety aspects of the apparatus. With a digital convectron gauge continuously monitoring pressure, the computer interface analyzes the input signals while making changes to a digital flow controller. This function works independently of the GUI, allowing the user to simply input and receive a desired pressure: quickly, easily, and intuitively. The discharge tube is a 36" x 4" id glass cylinder with a 3" side port. A 3000 volt, 10 mA power supply is used to break down the plasma. A 300-turn solenoid was created to demonstrate the magnetic pinching of a plasma. All primary functions of the device are controlled through the GUI digital controllers. This configuration allows operators to safely control the pressure (100 mTorr-1 Torr), magnetic field (0-90 Gauss, 7 amps, 10 volts), and finally, the voltage applied across the electrodes (0-3000 V, 10 mA).
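The pressure feedback loop described above (gauge reading in, flow-controller setpoint out) can be sketched as a simple proportional controller. The gain and units below are illustrative assumptions; the actual LabVIEW loop and controller tuning are not specified in the abstract:

```python
def pressure_feedback_step(target_mtorr, measured_mtorr, flow_setpoint,
                           gain=0.01):
    """One iteration of a proportional pressure-control loop:
    nudge the flow-controller setpoint toward the target pressure.
    The gain value is an illustrative assumption."""
    error = target_mtorr - measured_mtorr   # gauge vs. requested
    return flow_setpoint + gain * error     # new flow setpoint
```

Run repeatedly against the convectron gauge reading, a loop like this lets the user simply enter a desired pressure and have the flow adjust itself, independent of the GUI.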

  6. Using Mobile Devices to Display, Overlay, and Animate Geophysical Data and Imagery

    NASA Astrophysics Data System (ADS)

    Batzli, S.; Parker, D.

    2011-12-01

    A major challenge in mobile-device map application development is to offer rich content and features with simple and intuitive controls and fast performance. Our goal is to bring visualization, animation, and notifications of near real-time weather and earth observation information derived from satellite and sensor data to mobile devices. Our robust back-end processing infrastructure can deliver content in the form of images, shapes, standard descriptive formats (eg. KML, JSON) or raw data to a variety of desktop software, browsers, and mobile devices on demand. We have developed custom interfaces for low-bandwidth browsers (including mobile phones) and high-feature browsers (including smartphones), as well as native applications for Android and iOS devices. Mobile devices offer time- and location-awareness and persistent data connections, allowing us to tailor timely notifications and displays to the user's geographic and time context. This presentation includes a live demo of how our mobile apps deliver animation of standard and custom data products in an interactive map interface.
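A notification in the descriptive-format style the abstract mentions (KML, JSON) could be assembled as a small GeoJSON-like payload. The property names below are assumptions for illustration, not the authors' actual schema:

```python
import json

def build_notification(lat, lon, product, timestamp):
    """Assemble a minimal GeoJSON-style Feature for a location- and
    time-aware notification. 'product' and 'time' property names are
    illustrative assumptions."""
    payload = {
        "type": "Feature",
        # GeoJSON orders coordinates as [longitude, latitude]
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"product": product, "time": timestamp},
    }
    return json.dumps(payload)
```

A mobile client that knows its own position and the current time can then filter such payloads to show only the products relevant to the user's geographic and temporal context.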

  7. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  8. Onboard System Evaluation of Rotors Vibration, Engines (OBSERVE) monitoring System

    DTIC Science & Technology

    1992-07-01

consists of a Data Acquisition Unit (DAU), Control and Display Unit ( CADU ), Universal Tracking Devices (UTD), Remote Cockpit Display (RCD) and a PC...and Display Unit ( CADU ) - The CADU provides data storage and a graphical user interface necessary to display both the measured data and diagnostic...information. The CADU has an interface to a Credit Card Memory (CCM) which operates similar to a disk drive, allowing the storage of data and programs. The

  9. Gloved Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Adams, Richard (Inventor); Hannaford, Blake (Inventor); Olowin, Aaron (Inventor)

    2015-01-01

    Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.

  10. The Effect of Input Device on User Performance With a Menu-Based Natural Language Interface

    DTIC Science & Technology

    1988-01-01

Texas. The experiment was conducted and the data were analyzed by Virginia Polytechnic Institute and State University human factors engineering personnel...comments. Thanks to Dr. William Fisher for his help in the parsing of the grammar used in the MBNL interface prototype, and to Mr. Ken Stevenson for...natural language instructions to accomplish particular tasks (Bobrow & Collins, 1975; Brown, Burton, & Bell, 1975; Ford, 1981; Green, Wolf, Chomsky

  11. Evaluation of user interface and workflow design of a bedside nursing clinical decision support system.

    PubMed

    Yuan, Michael Juntao; Finley, George Mike; Long, Ju; Mills, Christy; Johnson, Ron Kim

    2013-01-31

    Clinical decision support systems (CDSS) are important tools to improve health care outcomes and reduce preventable medical adverse events. However, the effectiveness and success of CDSS depend on their implementation context and usability in complex health care settings. As a result, usability design and validation, especially in real world clinical settings, are crucial aspects of successful CDSS implementations. Our objective was to develop a novel CDSS to help frontline nurses better manage critical symptom changes in hospitalized patients, hence reducing preventable failure to rescue cases. A robust user interface and implementation strategy that fit into existing workflows was key for the success of the CDSS. Guided by a formal usability evaluation framework, UFuRT (user, function, representation, and task analysis), we developed a high-level specification of the product that captures key usability requirements and is flexible to implement. We interviewed users of the proposed CDSS to identify requirements, listed functions, and operations the system must perform. We then designed visual and workflow representations of the product to perform the operations. The user interface and workflow design were evaluated via heuristic and end user performance evaluation. The heuristic evaluation was done after the first prototype, and its results were incorporated into the product before the end user evaluation was conducted. First, we recruited 4 evaluators with strong domain expertise to study the initial prototype. Heuristic violations were coded and rated for severity. Second, after development of the system, we assembled a panel of nurses, consisting of 3 licensed vocational nurses and 7 registered nurses, to evaluate the user interface and workflow via simulated use cases. We recorded whether each session was successfully completed and its completion time. 
Each nurse was asked to use the National Aeronautics and Space Administration (NASA) Task Load Index to self-evaluate the amount of cognitive and physical burden associated with using the device. A total of 83 heuristic violations were identified in the studies. The distribution of the heuristic violations and their average severity are reported. The nurse evaluators successfully completed all 30 sessions of the performance evaluations. All nurses were able to use the device after a single training session. On average, the nurses took 111 seconds (SD 30 seconds) to complete the simulated task. The NASA Task Load Index results indicated that the work overhead on the nurses was low. In fact, most of the burden measures were consistent with zero. The only potentially significant burden was temporal demand, which was consistent with the primary use case of the tool. The evaluation has shown that our design was functional and met the requirements demanded by the nurses' tight schedules and heavy workloads. The user interface embedded in the tool provided compelling utility to the nurse with minimal distraction.
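    The two quantitative measures reported here, task-time statistics and NASA-TLX workload scoring, are simple to compute. A minimal sketch (the sample times and subscale ratings below are hypothetical; the study reports a mean of 111 s, SD 30 s, over 30 sessions):

```python
from statistics import mean, stdev

# Hypothetical completion times (seconds) for simulated task sessions.
times = [80, 95, 105, 110, 120, 130, 145]

def summarize(samples):
    """Mean and standard deviation of session completion times."""
    return {"mean": mean(samples), "sd": stdev(samples)}

def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Unweighted ("raw") NASA-TLX: average of the six 0-100 subscale ratings."""
    return mean([mental, physical, temporal, performance, effort, frustration])

summary = summarize(times)
# Temporal demand dominating, consistent with the study's finding.
score = raw_tlx(10, 5, 40, 10, 15, 5)
```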

  12. A user-system interface agent

    NASA Technical Reports Server (NTRS)

    Wakim, Nagi T.; Srivastava, Sadanand; Bousaidi, Mehdi; Goh, Gin-Hua

    1995-01-01

    Agent-based technologies answer several challenges posed by additional information processing requirements in today's computing environments. In particular, (1) users desire interaction with computing devices in a mode which is similar to that used between people, (2) the efficiency and successful completion of information processing tasks often require a high level of expertise in complex and multiple domains, (3) information processing tasks often require handling of large volumes of data and, therefore, continuous and endless processing activities. The concept of an agent is an attempt to address these new challenges by introducing information processing environments in which (1) users can communicate with a system in a natural way, (2) an agent is a specialist and a self-learner and, therefore, it qualifies to be trusted to perform tasks independent of the human user, and (3) an agent is an entity that is continuously active performing tasks that are either delegated to it or self-imposed. The work described in this paper focuses on the development of an interface agent for users of a complex information processing environment (IPE). This activity is part of an on-going effort to build a model for developing agent-based information systems. Such systems will be highly applicable to environments which require a high degree of automation, such as flight control operations and/or processing of large volumes of data in complex domains, such as the EOSDIS environment and other multidisciplinary, scientific data systems. The concept of an agent as an information processing entity is fully described with emphasis on characteristics of special interest to the User-System Interface Agent (USIA). Issues such as agent 'existence' and 'qualification' are discussed in this paper. 
Based on a definition of an agent and its main characteristics, we propose an architecture for the development of interface agents for users of an IPE that is agent-oriented and whose resources are likely to be distributed and heterogeneous in nature. The architecture of USIA is outlined in two main components: (1) the user interface, which is concerned with issues such as user dialog and interaction, user modeling, and adaptation to user profile, and (2) the system interface part, which deals with identification of IPE capabilities, task understanding and feasibility assessment, and task delegation and coordination of assistant agents.

  13. Human-scale interaction for virtual model displays: a clear case for real tools

    NASA Astrophysics Data System (ADS)

    Williams, George C.; McDowall, Ian E.; Bolas, Mark T.

    1998-04-01

    We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.

  14. Biosleeve Human-Machine Interface

    NASA Technical Reports Server (NTRS)

    Assad, Christopher (Inventor)

    2016-01-01

    Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight fitting sleeve worn on a user arm and including a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal processing, and communications electronics may be built into the sleeve and control data may be transmitted wirelessly to the controlled machine or robotic device.
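    A common first step in EMG-based gesture control like the sleeve describes is extracting a windowed amplitude feature before classification. A minimal sketch (the RMS feature is a standard choice; the threshold and window values are hypothetical, not from the patent):

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG sample window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def classify(emg_windows, threshold):
    """Label each window 'active' (muscle contraction) or 'rest'.
    The threshold is a hypothetical per-user calibration value."""
    return ["active" if rms(w) > threshold else "rest" for w in emg_windows]

windows = [[0.01, -0.02, 0.015],   # quiet baseline
           [0.8, -0.9, 0.7]]       # strong contraction
labels = classify(windows, threshold=0.1)
```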

  15. Transparently Interposing User Code at the System Interface

    DTIC Science & Technology

    1992-09-01

    trademarks of Symantec Corporation. AFS is a trademark of Transarc Corporation. PC-cillin is a trademark of Trend Micro Devices, Incorporated. Scribe is a...communication. Finally, both the Norton AntiVirus [Symantec 91b] and PC-cillin [ Trend 90] anti-virus applications intercept destructive file operations made... Trend Micro Devices, Incorporated, 1990. [Tygar & Yee 91] J. D. Tygar, Bennet Yee. Dyad: A System for Using Physically Secure Coprocessors

  16. Upper Body-Based Power Wheelchair Control Interface for Individuals With Tetraplegia.

    PubMed

    Thorp, Elias B; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica P; Pierella, Camilla; Roth, Elliot J; Seanez Gonzalez, Ismael; Mussa-Ivaldi, Ferdinando A

    2016-02-01

    Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users a limited vocabulary of commands and often do not take advantage of the user's residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control a power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five driving sessions, were able to achieve smoothness similar to that with their current joystick. All participants were about twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize training controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control.
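    The core of such a body-machine interface is a calibrated linear map from a high-dimensional body-signal vector down to two proportional command axes. A minimal sketch (the 4-sensor layout and matrix values are illustrative assumptions; the actual study used more sensors and per-user calibration):

```python
def bmi_command(shoulder_signals, calibration_matrix):
    """Map a shoulder-kinematics vector to proportional (forward, turn)
    wheelchair commands via a calibrated linear map."""
    return [
        sum(w * s for w, s in zip(row, shoulder_signals))
        for row in calibration_matrix
    ]

# Hypothetical calibration: 4 shoulder sensors -> 2 command axes.
CAL = [
    [0.5, 0.5, 0.0, 0.0],    # forward speed: both shoulders elevate together
    [0.5, -0.5, 0.0, 0.0],   # turn rate: left/right shoulder asymmetry
]

# Both shoulders raised equally: drive straight ahead.
forward, turn = bmi_command([1.0, 1.0, 0.0, 0.0], CAL)
```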

  17. Expansion of Smartwatch Touch Interface from Touchscreen to Around Device Interface Using Infrared Line Image Sensors.

    PubMed

    Lim, Soo-Chul; Shin, Jungsoon; Kim, Seung-Chan; Park, Joonah

    2015-07-09

    Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose the expansion of the touch region of the screen to the back of the user's hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read the wrist gestures. The gyroscope incorporates a dynamic time warping gesture recognition algorithm for eliminating unintended touch inputs during the free motion of the wrist while wearing the smartwatch. The prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed positional information of the finger when it was used to touch the back of the hand could be used to control the smartwatch graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces.
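    Dynamic time warping, the technique named above for matching wrist-gyro traces against gesture templates despite speed variations, can be sketched in a few lines (the template names and toy traces are illustrative, not the paper's data):

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def recognize(trace, templates):
    """Return the template name with the smallest DTW distance to the trace."""
    return min(templates, key=lambda name: dtw_distance(trace, templates[name]))

templates = {"flick": [0, 5, 0], "roll": [0, 1, 2, 3]}
best = recognize([0, 0, 4, 5, 0], templates)
```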

  18. Glove-talk II - a neural-network interface which maps gestures to parallel formant speech synthesizer controls.

    PubMed

    Fels, S S; Hinton, G E

    1997-01-01

    Glove-Talk II is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-Talk II uses several input devices, a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. With Glove-Talk II, the subject can speak slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
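    The gating scheme described, a gating network weighting the outputs of a vowel expert and a consonant expert, is a mixture-of-experts blend. A minimal sketch (the two-parameter outputs and gate logits are illustrative; the real networks map hand state to ten synthesizer controls):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def gated_output(gate_logits, vowel_out, consonant_out):
    """Blend the vowel and consonant experts' control parameters using
    the gating network's weights, mixture-of-experts style."""
    w_vowel, w_consonant = softmax(gate_logits)
    return [w_vowel * v + w_consonant * c for v, c in zip(vowel_out, consonant_out)]

# With a strongly vowel-favoring gate, the blend tracks the vowel expert.
out = gated_output([10.0, -10.0], [0.8, 0.2], [0.1, 0.9])
```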

  19. Six axis force feedback input device

    NASA Technical Reports Server (NTRS)

    Ohm, Timothy (Inventor)

    1998-01-01

    The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.
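    The "fictitious forces" constraining the input device within imaginary boundaries are typically rendered as a virtual spring that pushes back only when the device leaves the allowed workspace. A one-axis sketch (the stiffness value is a hypothetical tuning constant, not from the patent):

```python
def boundary_force(position, low, high, stiffness=50.0):
    """Fictitious spring force along one axis that pushes the master input
    device back inside an imaginary workspace boundary [low, high].
    Zero force inside the boundary; proportional push-back outside it."""
    if position < low:
        return stiffness * (low - position)    # push up toward the boundary
    if position > high:
        return -stiffness * (position - high)  # push back down
    return 0.0
```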

  20. Challenges in Securing the Interface Between the Cloud and Pervasive Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagesse, Brent J

    2011-01-01

    Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers to the designers of pervasive systems, there are some limitations of leveraging cloud computing that must be addressed. We take the position that cloud-based pervasive systems must be secured holistically and discuss ways this might be accomplished. In this paper, we discuss a pervasive system utilizing cloud computing resources and issues that must be addressed in such a system. In this system, the user's mobile device cannot always have network access to leverage resources from the cloud, so it must make intelligent decisions about what data should be stored locally and what processes should be run locally. As a result of these decisions, the user becomes vulnerable to attacks while interfacing with the pervasive system.
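    The local-versus-cloud decision described above amounts to a placement policy over connectivity, cost, and sensitivity. A toy sketch (field names and thresholds are illustrative assumptions, not from the paper):

```python
def place_task(task, network_up, battery_level):
    """Decide where a pervasive-client task should run and keep its data.
    'task' is a dict with a 'sensitive' flag and a relative 'cost'."""
    if task["sensitive"]:
        return "local"   # never ship sensitive data off-device
    if not network_up:
        return "local"   # offline: no choice but to run locally
    if task["cost"] > 5 and battery_level < 0.8:
        return "cloud"   # heavy work drains the battery; offload it
    return "local"

decision = place_task({"sensitive": False, "cost": 10},
                      network_up=True, battery_level=0.3)
```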

  1. Natural interaction for unmanned systems

    NASA Astrophysics Data System (ADS)

    Taylor, Glenn; Purman, Ben; Schermerhorn, Paul; Garcia-Sampedro, Guillermo; Lanting, Matt; Quist, Michael; Kawatsu, Chris

    2015-05-01

    Military unmanned systems today are typically controlled by two methods: tele-operation or menu-based, search-and-click interfaces. Both approaches require the operator's constant vigilance: tele-operation requires constant input to drive the vehicle inch by inch; a menu-based interface requires eyes on the screen in order to search through alternatives and select the right menu item. In both cases, operators spend most of their time and attention driving and minding the unmanned systems rather than being warfighters. With these approaches, the platform and interface become more of a burden than a benefit. The availability of inexpensive sensor systems in products such as Microsoft Kinect™ or Nintendo Wii™ has resulted in new ways of interacting with computing systems, but new sensors alone are not enough. Developing useful and usable human-system interfaces requires understanding users and interaction in context: not just what new sensors afford in terms of interaction, but how users want to interact with these systems, for what purpose, and how sensors might enable those interactions. Additionally, the system needs to reliably make sense of the user's inputs in context, translate that interpretation into commands for the unmanned system, and give feedback to the user. In this paper, we describe an example natural interface for unmanned systems, called the Smart Interaction Device (SID), which enables natural two-way interaction with unmanned systems including the use of speech, sketch, and gestures. We present a few example applications of SID to different types of unmanned systems and different kinds of interactions.

  2. XOP: a multiplatform graphical user interface for synchrotron radiation spectral and optics calculations

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Dejus, Roger J.

    1997-11-01

    XOP (X-ray OPtics utilities) is a graphical user interface (GUI) created to execute several computer programs that calculate the basic information needed by a synchrotron beamline scientist (designer or experimentalist). Typical examples of such calculations are: insertion device (undulator or wiggler) spectral and angular distributions, mirror and multilayer reflectivities, and crystal diffraction profiles. All programs are provided to the user under a unified GUI, which greatly simplifies their execution. The XOP optics applications (especially mirror calculations) take their basic input (optical constants, compound and mixture tables) from a flexible file-oriented database, which allows the user to select data from a large number of choices and also to customize their own data sets. XOP includes many mathematical and visualization capabilities. It also permits combining the reflectivities of several mirrors and filters and applying their combined effect to a source spectrum. This feature is very useful when calculating the thermal load on a series of optical elements. The XOP interface is written in IDL (Interactive Data Language). An embedded version of XOP, which freely runs under most Unix platforms (HP, Sun, DEC, Linux, etc.) and under Windows 95 and NT, is available upon request.
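    Chaining element reflectivities onto a source spectrum, as XOP does for thermal-load estimates, is a pointwise product over a shared wavelength (or energy) grid. A minimal sketch (the three-point grid and curve values are invented for illustration):

```python
def transmit(source_flux, *element_curves):
    """Apply each optical element's reflectivity/transmission curve to a
    source spectrum, point by point. All curves share one energy grid."""
    out = list(source_flux)
    for curve in element_curves:
        out = [f * t for f, t in zip(out, curve)]
    return out

source = [100.0, 100.0, 100.0]
mirror = [0.9, 0.8, 0.5]   # hypothetical mirror reflectivity vs. energy
filt = [1.0, 0.5, 0.1]     # hypothetical filter transmission vs. energy
flux = transmit(source, mirror, filt)

# Power absorbed per grid point -- the quantity behind thermal-load estimates.
absorbed = [s - f for s, f in zip(source, flux)]
```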

  3. Gaming control using a wearable and wireless EEG-based brain-computer interface device with novel dry foam-based sensors

    PubMed Central

    2012-01-01

    A brain-computer interface (BCI) is a communication system that can help users interact with the outside environment by translating brain signals into machine commands. The use of electroencephalographic (EEG) signals has become the most common approach for a BCI because of their usability and strong reliability. Many EEG-based BCI devices have been developed with traditional wet- or micro-electro-mechanical-system (MEMS)-type EEG sensors. However, those traditional sensors are uncomfortable and require conductive gel and skin preparation on the part of the user. Therefore, acquiring the EEG signals in a comfortable and convenient manner is an important factor that should be incorporated into a novel BCI device. In the present study, a wearable, wireless and portable EEG-based BCI device with dry foam-based EEG sensors was developed and was demonstrated using a gaming control application. The dry EEG sensors operated without conductive gel; however, they were able to provide good conductivity and were able to acquire EEG signals effectively by adapting to irregular skin surfaces and by maintaining proper skin-sensor impedance on the forehead site. We have also demonstrated a real-time cognitive stage detection application of gaming control using the proposed portable device. The results of the present study indicate that using this portable EEG-based BCI device to conveniently and effectively control the outside world provides a promising approach for rehabilitation engineering research. PMID:22284235
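    A lightweight way to watch a single EEG band on an embedded BCI device, one plausible building block for the kind of real-time detection described (the paper does not specify its algorithm), is the Goertzel algorithm, sketched here on a synthetic 10 Hz "alpha" tone:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Signal power near one frequency via the Goertzel algorithm --
    a cheap alternative to a full FFT when only one band matters."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

fs = 128
t = [i / fs for i in range(fs)]  # one second of synthetic signal
alpha = [math.sin(2 * math.pi * 10 * x) for x in t]  # pure 10 Hz tone
p_alpha = goertzel_power(alpha, fs, 10)
p_beta = goertzel_power(alpha, fs, 20)
```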

  4. WADeG Cell Phone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-09-01

    The on-phone software captures the images from the CMOS camera periodically, stores the pictures, and periodically transmits those images over the cellular network to the server. The cell phone software consists of several modules: CamTest.cpp, CamStarter.cpp, StreamIOHandler.cpp, and covertSmartDevice.cpp. The camera application on the SmartPhone is CamStarter, which is "the" user interface for the camera system. The CamStarter user interface allows a user to start/stop the camera application and transfer files to the server. The CamStarter application interfaces to the CamTest application through registry settings. Both the CamStarter and CamTest applications must be separately deployed on the smartphone to run the camera system application. When a user selects the Start button in CamStarter, CamTest is created as a process. The smartphone begins taking small pictures (CAPTURE mode), analyzing those pictures for certain conditions, and saving those pictures on the smartphone. This process will terminate when the user selects the Stop button. The CamTest code spins off an asynchronous thread, StreamIOHandler, to check for pictures taken by the camera. The received image is then tested by StreamIOHandler to see if it meets certain conditions. If those conditions are met, the CamTest program is notified through the setting of a registry key value and the image is saved in a designated directory in a custom BMP file which includes a header and the image data. When the user selects the Transfer button in the CamStarter user interface, the covertSmartDevice code is created as a process. CovertSmartDevice gets all of the files in a designated directory, opens a socket connection to the server, sends each file, and then terminates.
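    The final step, sending each file over a socket to the server, needs some framing so the receiver knows where one image ends. A length-prefixed sketch in Python (the protocol and payload are illustrative assumptions; the original C++ code may frame its transfers differently), demonstrated over a local socket pair instead of the cellular network:

```python
import socket
import struct

def send_blob(sock, data):
    """Length-prefixed send: 4-byte big-endian size header, then payload."""
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_blob(sock):
    """Read exactly one length-prefixed message."""
    (size,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, size)

def _recv_exact(sock, n):
    """recv() may return partial data; loop until n bytes arrive."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed early")
        buf += chunk
    return buf

a, b = socket.socketpair()
send_blob(a, b"BMP-header-and-pixels")
received = recv_blob(b)
a.close(); b.close()
```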

  5. Mechatronics Interface for Computer Assisted Prostate Surgery Training

    NASA Astrophysics Data System (ADS)

    Altamirano del Monte, Felipe; Padilla Castañeda, Miguel A.; Arámbula Cosío, Fernando

    2006-09-01

    This work presents the development of a mechatronics device to simulate the interaction of the surgeon with the surgical instrument (resectoscope) used during a Transurethral Resection of the Prostate (TURP). Our mechatronics interface is part of a computer assisted system for training in TURP, which is based on a 3D graphics model of the prostate that can be deformed and resected interactively by the user. The mechatronics interface is the device that the urology residents will manipulate to simulate the movements performed during surgery. Our current prototype has five degrees of freedom, which are enough for a realistic simulation of the surgical movements. Two of these degrees of freedom are linear, to determine the linear displacement of the resecting loop, and the other three are rotational, to determine three directions and amounts of rotation.

  6. Developing a smartphone interface for the Florida Environmental Public Health Tracking Web portal.

    PubMed

    Jordan, Melissa; DuClos, Chris; Folsom, John; Thomas, Rebecca

    2015-01-01

    As smartphone and tablet devices continue to proliferate, it is becoming increasingly important to tailor information delivery to the mobile device. The Florida Environmental Public Health Tracking Program recognized that the mobile device user needs Web content formatted to smaller screen sizes, simplified data displays, and reduced textual information. The Florida Environmental Public Health Tracking Program developed a smartphone-friendly version of the state Web portal for easier access by mobile device users. The resulting smartphone-friendly portal combines calculated data measures such as inpatient hospitalizations and emergency department visits and presents them grouped by county, along with temporal trend graphs. An abbreviated version of the public health messaging provided on the traditional Web portal is also provided, along with social media connections. As a result of these efforts, the percentage of Web site visitors using an iPhone tripled in just 1 year.

  7. Prototype of haptic device for sole of foot using magnetic field sensitive elastomer

    NASA Astrophysics Data System (ADS)

    Kikuchi, T.; Masuda, Y.; Sugiyama, M.; Mitsumata, T.; Ohori, S.

    2013-02-01

    Walking is one of the most popular activities and a healthy aerobic exercise for the elderly. However, if they have physical and/or cognitive disabilities, it can be challenging to go somewhere they don't know well. The final goal of this study is to develop a virtual reality walking system that allows users to walk in virtual worlds fabricated with computer graphics. We focus on a haptic device that can apply various plantar pressures to the user's sole as an additional sense in virtual reality walking. In this study, we discuss the use of a magnetic field sensitive elastomer (MSE) as a working material for the haptic interface on the sole. The first prototype with MSE was developed and evaluated in this work. According to the measurement of plantar pressures, it was found that this device can apply different pressures to the sole of a light-weight user by applying a magnetic field to the MSE. The result also implied the need to improve the magnetic circuit and the basic structure of the mechanism of the device.

  8. On the tip of the tongue: learning typing and pointing with an intra-oral computer interface.

    PubMed

    Caltenco, Héctor A; Breidegard, Björn; Struijk, Lotte N S Andreasen

    2014-07-01

    To evaluate typing and pointing performance and improvement over time of four able-bodied participants using an intra-oral tongue-computer interface for computer control. A physically disabled individual may lack the ability to efficiently control standard computer input devices. There have been several efforts to produce and evaluate interfaces that provide individuals with physical disabilities the possibility to control personal computers. Training with the intra-oral tongue-computer interface was performed by playing games over 18 sessions. Skill improvement was measured through typing and pointing exercises at the end of each training session. Typing throughput improved from averages of 2.36 to 5.43 correct words per minute. Pointing throughput improved from averages of 0.47 to 0.85 bits/s. Target tracking performance, measured as relative time on target, improved from averages of 36% to 47%. Path following throughput improved from averages of 0.31 to 0.83 bits/s and decreased to 0.53 bits/s with more difficult tasks. Learning curves support the notion that the tongue can rapidly learn novel motor tasks. Typing and pointing performance of the tongue-computer interface is comparable to the performance of other proficient assistive devices, which makes the tongue a feasible input organ for computer control. Intra-oral computer interfaces could provide individuals with severe upper-limb mobility impairments the opportunity to control computers and automatic equipment. Typing and pointing performance of the tongue-computer interface is comparable to the performance of other proficient assistive devices, but does not easily cause fatigue and might be invisible to other people, which is highly prioritized by assistive device users. Combination of visual and auditory feedback is vital for a good performance of an intra-oral computer interface and helps to reduce involuntary or erroneous activations.
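    The throughput figures above follow standard formulas: pointing throughput in bits/s from Fitts' law, and typing throughput in correct words per minute. A sketch (the Shannon formulation of Fitts' index of difficulty is a standard choice, and the sample inputs below are hypothetical; the paper does not state its exact formulation):

```python
import math

def fitts_throughput(distance, width, movement_time_s):
    """Pointing throughput in bits/s: ID = log2(D/W + 1); TP = ID / MT."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time_s

def typing_throughput(correct_chars, minutes, chars_per_word=5):
    """Words-per-minute from correctly typed characters, with the usual
    5-characters-per-word convention."""
    return (correct_chars / chars_per_word) / minutes

tp = fitts_throughput(distance=200, width=40, movement_time_s=3.1)
wpm = typing_throughput(correct_chars=136, minutes=5)
```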

  9. Touch-screen technology for the dynamic display of 2D spatial information without vision: promise and progress.

    PubMed

    Klatzky, Roberta L; Giudice, Nicholas A; Bennett, Christopher R; Loomis, Jack M

    2014-01-01

    Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.

  10. Automating spectral measurements

    NASA Astrophysics Data System (ADS)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is a mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
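    The "point-and-click program as automation server" idea boils down to exposing a command dispatcher that external scripts can drive. A language-neutral toy sketch in Python (the command names SCAN, BARCODE, and MOVE are invented for illustration and are not the spectro-BASIC vocabulary):

```python
class AutomationServer:
    """Toy stand-in for a DAQ application that is scriptable from outside:
    clients submit simple text commands that are dispatched to handlers,
    mirroring how a GUI program can double as an automation server."""

    def __init__(self):
        self.log = []   # audit trail of executed commands

    def execute(self, line):
        cmd, *args = line.split()
        handler = getattr(self, "cmd_" + cmd.lower(), None)
        if handler is None:
            raise ValueError("unknown command: " + cmd)
        result = handler(*args)
        self.log.append(line)
        return result

    def cmd_scan(self, start_nm, stop_nm):
        return "scanned %s-%s nm" % (start_nm, stop_nm)   # stubbed measurement

    def cmd_barcode(self):
        return "SAMPLE-0001"   # stubbed ancillary-device read

    def cmd_move(self, position):
        return "stage at " + position   # stubbed XY-stage move

server = AutomationServer()
reply = server.execute("SCAN 400 700")
```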

  11. Supporting the Loewenstein occupational therapy cognitive assessment using distributed user interfaces.

    PubMed

    Tesoriero, Ricardo; Gallud Lazaro, Jose A; Altalhi, Abdulrahman H

    2017-02-01

    Improve the quantity and quality of information obtained from traditional Loewenstein Occupational Therapy Cognitive Assessment Battery systems to monitor the evolution of patients' rehabilitation process as well as to compare different rehabilitation therapies. The system replaces traditional artefacts with virtual versions of them to take advantage of cutting edge interaction technology. The system is defined as a Distributed User Interface (DUI) supported by a display ecosystem, including mobile devices as well as multi-touch surfaces. Due to the heterogeneity of the devices involved in the system, the software technology is based on a client-server architecture using the Web as the software platform. The system provides therapists with information that is not available (or it is very difficult to gather) using traditional technologies (i.e. response time measurements, object tracking, information storage and retrieval facilities, etc.). The use of DUIs allows therapists to gather information that is unavailable using traditional assessment methods as well as adapt the system to patients' profiles to increase the range of patients that are able to take this assessment. Implications for rehabilitation: Using a Distributed User Interface environment to carry out LOTCAs improves the quality of the information gathered during the rehabilitation assessment. This system captures physical data regarding patient's interaction during the assessment to improve the rehabilitation process analysis. Allows professionals to adapt the assessment procedure to create different versions according to patients' profiles. Improves the availability of patients' profile information to therapists to adapt the assessment procedure.

  12. Using Zigbee to integrate medical devices.

    PubMed

    Frehill, Paul; Chambers, Desmond; Rotariu, Cosmin

    2007-01-01

    Wirelessly enabling medical devices such as vital signs monitors, ventilators and infusion pumps allows central data collection. This paper discusses how data from these types of devices can be integrated into hospital systems using wireless sensor networking technology. Integrating devices protects the investment in them and opens up the possibility of networking with similar devices. In this context we present how Zigbee meets our requirements for bandwidth, power, security and mobility. We have examined the data throughputs of various medical devices, the required data frequency, the security of patient data and the logistics of moving patients while connected to devices. The paper describes a new, tested architecture that allows this data to be seamlessly integrated into a user interface or Healthcare Information System (HIS). The design supports the dynamic addition of medical devices that the system did not previously support. To achieve this, the hardware design is kept generic and the software interface for the different types of medical devices is well defined. These devices can also share the wireless resources with other types of sensors being developed in conjunction with this project, such as wireless ECG (electrocardiogram) and pulse-oximetry sensors.
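The "well-defined software interface" that lets new device types be added dynamically can be sketched as a parser registry: supporting a new device means registering one parser, with no change to the rest of the pipeline. All names and the frame layout below are illustrative assumptions, not the paper's code.

```python
# Registry of per-device-type frame parsers.
DEVICE_PARSERS = {}

def device_type(name):
    """Class decorator: register a parser instance under a device-type name."""
    def register(cls):
        DEVICE_PARSERS[name] = cls()
        return cls
    return register

class DeviceParser:
    def parse(self, frame: bytes) -> dict:
        raise NotImplementedError

@device_type("vital-signs")
class VitalSignsParser(DeviceParser):
    def parse(self, frame: bytes) -> dict:
        # Assumed frame layout: heart rate and SpO2, one byte each.
        return {"hr": frame[0], "spo2": frame[1]}

def handle(device: str, frame: bytes) -> dict:
    """Convert a raw frame into readable information via the registry."""
    return DEVICE_PARSERS[device].parse(frame)

print(handle("vital-signs", bytes([72, 98])))  # {'hr': 72, 'spo2': 98}
```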

  13. StarTrax --- The Next Generation User Interface

    NASA Astrophysics Data System (ADS)

    Richmond, Alan; White, Nick

    StarTrax is a software package to be distributed to end users for installation on their local computing infrastructure. It will provide access to many services of the HEASARC, i.e. bulletins, catalogs, and proposal and analysis tools, initially for the ROSAT MIPS (Mission Information and Planning System) and later for the Next Generation Browse. A user activating the GUI will reach all HEASARC capabilities through a uniform view of the system, independent of the local computing environment and of the networking method used to access StarTrax. Use it if you prefer the point-and-click metaphor of modern GUI technology to classical command-line interfaces (CLIs). Notable strengths include: ease of use; excellent portability; very robust server support; a feedback button on every dialog; and a painstakingly crafted User Guide. It is designed to support a large number of input devices, including terminals, workstations and personal computers. XVT's Portability Toolkit is used to build the GUI in C/C++ to run on OSF/Motif (UNIX or VMS), OPEN LOOK (UNIX), Macintosh, MS-Windows (DOS), or character-based systems.

  14. Forecasting hotspots using predictive visual analytics approach

    DOEpatents

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to be displayed in at least one user interface at an output of the computational device.
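The patented method combines a temporal and a geospatial prediction. A toy sketch of that two-step structure, under assumed stand-in models (a moving average in time, four-neighbour smoothing in space); this is illustrative only, not the patented algorithm:

```python
def temporal_forecast(series, window=3):
    """Next-step forecast for one grid cell: mean of the last `window` counts."""
    return sum(series[-window:]) / window

def geospatial_smooth(grid):
    """Smooth a 2-D count grid by averaging each cell with its 4-neighbours."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [grid[r][c]]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= r + dr < rows and 0 <= c + dc < cols:
                    vals.append(grid[r + dr][c + dc])
            out[r][c] = sum(vals) / len(vals)
    return out

history = [4, 6, 5]  # incident counts at one cell over successive periods
print(temporal_forecast(history))           # 5.0
print(geospatial_smooth([[0, 9], [0, 0]]))  # [[3.0, 3.0], [0.0, 3.0]]
```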

  15. Design and implementation of a cartographic client application for mobile devices using SVG Tiny and J2ME

    NASA Astrophysics Data System (ADS)

    Hui, L.; Behr, F.-J.; Schröder, D.

    2006-10-01

    Digital geospatial data can now be disseminated to mobile devices such as PDAs (personal digital assistants) and smartphones. Mobile devices that support J2ME (Java 2 Micro Edition) offer users and developers an open interface which they can use to develop or download software according to their own requirements. A WMS (Web Map Service) can now deliver not only traditional raster images but also vector images. SVGT (Scalable Vector Graphics Tiny) is a subset of SVG (Scalable Vector Graphics); because of its precise vector information, original styling and small file size, the SVGT format is well suited to geographic mapping, especially for mobile devices with limited network bandwidth. This paper describes the development of a cartographic client for mobile devices using SVGT and J2ME technology. The mobile device is simulated on a desktop computer for a series of tests against a WMS, for example sending requests, receiving the response data and then displaying images in both vector and raster formats. Analysis and design of the system structure, such as the user interface and code structure, are discussed; the limitations of mobile devices had to be taken into consideration in this application. The parsing of the XML document received from the WMS after a GetCapabilities request and the visual realization of SVGT and PNG (Portable Network Graphics) images are important implementation issues. Finally, the client was tested successfully on Nokia S40/60 mobile phones.
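The GetCapabilities parsing step described above can be sketched with a standard XML parser. The document below is a heavily trimmed, un-namespaced stand-in for a real WMS capabilities response (which is namespaced and far larger), shown here in Python rather than J2ME for brevity:

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for a WMS GetCapabilities response.
CAPABILITIES = """
<WMT_MS_Capabilities version="1.1.1">
  <Capability>
    <Request><GetMap><Format>image/png</Format>
      <Format>image/svg+xml</Format></GetMap></Request>
    <Layer><Title>Base</Title>
      <Layer><Name>roads</Name><Title>Roads</Title></Layer>
      <Layer><Name>rivers</Name><Title>Rivers</Title></Layer>
    </Layer>
  </Capability>
</WMT_MS_Capabilities>
"""

root = ET.fromstring(CAPABILITIES)
# Image formats the server offers for GetMap (raster PNG and vector SVG here).
formats = [f.text for f in root.iter("Format")]
# Requestable layers: only <Layer> elements that carry a <Name> child.
layers = [l.findtext("Name") for l in root.iter("Layer") if l.find("Name") is not None]
print(formats)  # ['image/png', 'image/svg+xml']
print(layers)   # ['roads', 'rivers']
```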

  16. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-computer interface technology. VR can be considered a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC's Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as a VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is its reliance on RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. Another limitation is that the VPL VR system is a turn-key system which does not give the user the flexibility to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. When using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  17. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    NASA Astrophysics Data System (ADS)

    Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans

    2017-05-01

    Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.
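The fourth approach reviewed above, embedding the simulation specification in an existing language, can be illustrated with a hypothetical miniature of such a DSL: users compose energy terms as Python objects and a backend flattens the specification into solver input. Class and attribute names here are invented for illustration and are not the published package's API; the field values are likewise illustrative (only A = 1.3e-11 J/m is the permalloy exchange constant used in standard problem 4).

```python
# Hypothetical embedded-DSL sketch for a micromagnetic simulation description.
class Exchange:
    def __init__(self, A):
        self.A = A  # exchange constant (J/m)
    def serialize(self):
        return f"Exchange A={self.A}"

class Zeeman:
    def __init__(self, H):
        self.H = H  # applied field (A/m), 3-vector; values illustrative
    def serialize(self):
        return f"Zeeman H={self.H}"

class System:
    def __init__(self, name, energy):
        self.name, self.energy = name, energy
    def to_spec(self):
        """Flatten the Python-level description into solver input text."""
        lines = [f"# system: {self.name}"] + [t.serialize() for t in self.energy]
        return "\n".join(lines)

sys4 = System("stdprob4", energy=[Exchange(A=1.3e-11),
                                  Zeeman(H=(-2.0e4, 4.3e3, 0.0))])
print(sys4.to_spec())
```

The point of the embedding is that the specification is ordinary Python: users can loop over field values, reuse terms, and version-control the script, which supports the reproducibility argument made in the abstract.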

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals into readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles and gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals; a SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer that is directly connected to the RFID devices and can be controlled locally or remotely. 
    There are multiple local computers managing different sites or transport vehicles. Control from remote sites, and the transmission of information to the central database server, take place over a secured Internet connection. The information stored in the central database server is shown on the web page, which users can view on the Internet. A dedicated and secured web and database server (HTTPS) is used to provide information security.
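The XML-driven database update and threshold alarms described above can be sketched as follows. The message format and field names are invented for illustration, and SQLite stands in for the SQL Server so the example is self-contained:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Illustrative stand-in for an update message sent by the application software.
UPDATE_XML = """
<update site="storage-A">
  <tag id="TAG-0001" temperature="21.5" seal="intact"/>
  <tag id="TAG-0002" temperature="67.0" seal="intact"/>
</update>
"""

db = sqlite3.connect(":memory:")  # SQLite stands in for the SQL Server here
db.execute("CREATE TABLE readings (site TEXT, tag TEXT, temperature REAL, seal TEXT)")

root = ET.fromstring(UPDATE_XML)
for tag in root.iter("tag"):
    db.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
               (root.get("site"), tag.get("id"),
                float(tag.get("temperature")), tag.get("seal")))

# Preset alarm criterion: notify when a temperature threshold is exceeded.
TEMP_LIMIT = 60.0
alarms = [row[0] for row in db.execute(
    "SELECT tag FROM readings WHERE temperature > ?", (TEMP_LIMIT,))]
print(alarms)  # ['TAG-0002']
```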

  19. An Innovative Speech-Based User Interface for Smarthomes and IoT Solutions to Help People with Speech and Motor Disabilities.

    PubMed

    Malavasi, Massimiliano; Turri, Enrico; Atria, Jose Joaquin; Christensen, Heidi; Marxer, Ricard; Desideri, Lorenzo; Coy, Andre; Tamburini, Fabio; Green, Phil

    2017-01-01

    Making better use of the increasing functional capabilities of home automation systems and Internet of Things (IoT) devices to support the needs of users with disabilities is the subject of a research project currently conducted by Area Ausili (Assistive Technology Area), a department of the Polo Tecnologico Regionale Corte Roncati of the Local Health Trust of Bologna (Italy), in collaboration with the AIAS Ausilioteca Assistive Technology (AT) Team. The main aim of the project is to develop experimental low-cost systems for environmental control through simplified and accessible user interfaces. Many of the activities focus on automatic speech recognition and are developed in the framework of the CloudCAST project. In this paper we report on the first technical achievements of the project and discuss possible future developments and applications within and outside CloudCAST.

  20. The Cortex project: A quasi-real-time information system to build control systems for high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Barillere, R.; Cabel, H.; Chan, B.; Goulas, I.; Le Goff, J. M.; Vinot, L.; Willmott, C.; Milcent, H.; Huuskonen, P.

    1994-12-01

    The Cortex control information system framework is being developed at CERN. It offers basic functions to allow the sharing of information, control and analysis functions; it presents a uniform human interface for such information and functions; it permits upgrades and additions without code modification and it is sufficiently generic to allow its use by most of the existing or future control systems at CERN. Services will include standard interfaces to user-supplied functions, analysis, archive and event management. Cortex does not attempt to carry out the direct data acquisition or control of the devices; these are activities which are highly specific to the application and are best done by commercial systems or user-written programs. Instead, Cortex integrates these application-specific pieces and supports them by supplying other commonly needed facilities such as collaboration, analysis, diagnosis and user assistance.

  1. The EPICS-based remote control system for muon beam line devices at J-PARC MUSE

    NASA Astrophysics Data System (ADS)

    Ito, T. U.; Nakahara, K.; Kawase, M.; Fujimori, H.; Kobayashi, Y.; Higemoto, W.; Miyake, Y.

    2010-04-01

    The remote control system for muon beam line devices of J-PARC MUSE has been developed with the Experimental Physics and Industrial Control System (EPICS). The EPICS input/output controller was installed on standard Linux PCs for slow control of the devices. Power supplies for 21 magnetic elements and four slit controllers for the decay-surface muon beam line in the Materials and Life Science Experimental Facility are now accessible via Ethernet from a graphical user interface composed using the Motif Editor and Display Manager.

  2. Eye-movements and Voice as Interface Modalities to Computer Systems

    NASA Astrophysics Data System (ADS)

    Farid, Mohsen M.; Murtagh, Fionn D.

    2003-03-01

    We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.

  3. Mobile app for chemical detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klunder, Gregory; Cooper, Chadway R.; Satcher, Jr., Joe H.

    The present invention incorporates the camera of a mobile device (phone, iPad, etc.) to capture an image from a chemical test kit and process the image to provide chemical information. A simple user interface enables automatic evaluation of the image, data entry and GPS information, and maintains records of previous analyses.
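The image-to-chemical-information step can be sketched as colorimetry: average a colour channel over the strip's test zone, then interpolate a calibration table. Every number below (channel choice, calibration points, units) is invented for illustration; the patent does not disclose these details.

```python
def mean_intensity(pixels):
    """Average the green channel of (r, g, b) pixels from the strip's test zone."""
    return sum(g for _, g, _ in pixels) / len(pixels)

# Invented calibration table: (green intensity, concentration in mg/L), descending.
CALIBRATION = [(200, 0.0), (150, 1.0), (100, 2.5), (50, 5.0)]

def concentration(intensity):
    """Piecewise-linear interpolation of the calibration table."""
    for (i_hi, c_lo), (i_lo, c_hi) in zip(CALIBRATION, CALIBRATION[1:]):
        if i_lo <= intensity <= i_hi:
            frac = (i_hi - intensity) / (i_hi - i_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("intensity outside calibrated range")

pixels = [(90, 125, 60), (88, 125, 58)]  # sampled test-zone pixels
print(concentration(mean_intensity(pixels)))  # 1.75
```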

  4. Multi-degree of freedom joystick for virtual reality simulation.

    PubMed

    Head, M J; Nelson, C A; Siu, K C

    2013-11-01

    A modular control interface and simulated virtual reality environment were designed and created in order to determine how the kinematic architecture of a control interface affects minimally invasive surgery training. A user is able to selectively determine the kinematic configuration of an input device (number, type and location of degrees of freedom) for a specific surgical simulation through the use of modular joints and constraint components. Furthermore, passive locking was designed and implemented through the use of inflated latex tubing around rotational joints in order to allow a user to step away from a simulation without unwanted tool motion. It is believed that these features will facilitate improved simulation of a variety of surgical procedures and, thus, improve surgical skills training.

  5. A flexible microcontroller-based data acquisition device.

    PubMed

    Hercog, Darko; Gergič, Bojan

    2014-06-02

    This paper presents a low-cost microcontroller-based data acquisition device. The key component of the presented solution is a configurable microcontroller-based device with an integrated USB transceiver and a 12-bit analogue-to-digital converter (ADC). The presented embedded DAQ device contains a preloaded program (firmware) that enables easy acquisition and generation of analogue and digital signals, and data transfer between the device and the application running on a PC over USB. The device has been developed as a USB human interface device (HID). This USB class is natively supported by most operating systems, so no additional USB drivers need to be installed. The input/output peripheral of the presented device is not static but flexible, and can easily be configured to customised needs without changing the firmware. Using the developed configuration utility, a majority of the chip pins can be configured as analogue inputs, digital inputs/outputs, PWM outputs or SPI lines. In addition, LabVIEW drivers have been developed for this device. With these drivers, data acquisition and signal-processing algorithms, as well as a graphical user interface (GUI), can easily be developed using the well-known, industry-proven, block-oriented LabVIEW programming environment.
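The pin-configuration idea above, assigning each pin a role without touching the firmware, can be sketched as a small configuration model. The role names, pin count and validation rule are invented for illustration, not the actual configuration utility:

```python
# Invented pin-configuration model for a hypothetical 16-pin device.
SUPPORTED = {"AI", "DI", "DO", "PWM", "SPI"}

class PinConfig:
    def __init__(self, n_pins=16):
        self.roles = {pin: "DI" for pin in range(n_pins)}  # default: digital input
    def assign(self, pin, role):
        if role not in SUPPORTED:
            raise ValueError(f"unsupported role {role!r}")
        self.roles[pin] = role
    def summary(self):
        """Count how many pins carry each role."""
        counts = {}
        for role in self.roles.values():
            counts[role] = counts.get(role, 0) + 1
        return counts

cfg = PinConfig()
cfg.assign(0, "AI")          # one analogue input
cfg.assign(1, "PWM")         # one PWM output
for pin in (2, 3, 4, 5):     # four lines reserved for an SPI peripheral
    cfg.assign(pin, "SPI")
print(cfg.summary())  # {'AI': 1, 'PWM': 1, 'SPI': 4, 'DI': 10}
```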

  6. Leveraging Electronic Tablets for General Pediatric Care

    PubMed Central

    McKee, S.; Dugan, T.M.; Downs, S.M.

    2015-01-01

    Summary Background We have previously shown that a scannable paper-based interface linked to a computerized clinical decision support system (CDSS) can effectively screen patients in pediatric waiting rooms and support the physician with evidence-based care guidelines at the time of the clinical encounter. However, a scannable paper-based interface has many inherent limitations, including the lack of real-time communication with the CDSS and proneness to human and system errors. An electronic tablet-based user interface can not only overcome these limitations but may also support advanced functionality for clinical and research use. However, the use of such devices for pediatric care is not well studied in clinical settings. Objective In this pilot study, we enhance our pediatric CDSS with an electronic tablet-based user interface and evaluate it for usability as well as for changes in patient questionnaire completion rates. Methods Child Health Improvement through Computers Leveraging Electronic Tablets, or CHICLET, is an electronic tablet-based user interface developed to augment the existing scannable paper interface to our CDSS. For the purposes of this study, we deployed CHICLET in one outpatient pediatric clinic. Usability factors for CHICLET were evaluated via caregiver and staff surveys. Results Compared to the scannable paper-based interface, we observed an 18% increase, or 30% relative increase, in question completion rates using CHICLET. This difference was statistically significant. Caregiver and staff survey results were positive about using CHICLET in the clinical environment. Conclusions Electronic tablets are a viable interface for capturing patient self-report in pediatric waiting rooms. We further hypothesize that the use of electronic tablet-based interfaces will drive advances in computerized clinical decision support and create opportunities for patient engagement. PMID:25848409
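Read together, the reported 18% increase and 30% relative increase imply a paper-baseline completion rate of about 60%, assuming the 18% is an absolute (percentage-point) gain. A quick arithmetic check:

```python
absolute_gain = 0.18  # percentage-point increase with CHICLET (assumed absolute)
relative_gain = 0.30  # relative increase over the paper baseline

# relative = absolute / baseline, so the two figures pin down the baseline.
baseline = absolute_gain / relative_gain
print(round(baseline, 2))                  # 0.6  -> ~60% completion on paper
print(round(baseline + absolute_gain, 2))  # 0.78 -> ~78% completion with CHICLET
```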

  7. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review

    PubMed Central

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; De Marchi, Ana Carolina Bertoletti

    2016-01-01

    Background Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences aimed at promoting better physical and/or mental health. It is therefore necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. Objective This study presents the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. Methods The search was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct, from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As requirements, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. Results In the 10 studies found, there was no standardization in the methods, because they considered diverse analysis variables. Authors used heterogeneous instruments to evaluate gesture-based interfaces, and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10) and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need to create an evaluation protocol for older adults that provides a user-friendly interface suited to the user's age and limitations. 
    Conclusions We conclude that this field needs a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence. PMID:27702737

  8. Electro pneumatic trainer embedded with programmable integrated circuit (PIC) microcontroller and graphical user interface platform for aviation industries training purposes

    NASA Astrophysics Data System (ADS)

    Burhan, I.; Azman, A. A.; Othman, R.

    2016-10-01

    An electro-pneumatic trainer embedded with a programmable integrated circuit (PIC) microcontroller and a Visual Basic (VB) platform was fabricated as a tool supporting the existing teaching and learning process, with the objective of enhancing students' knowledge and hands-on skills, especially with electro-pneumatic devices. The existing learning process for electro-pneumatic courses conducted in the classroom does not emphasize simulation or complex practical aspects. VB is used as the platform for the graphical user interface (GUI), while the PIC serves as the interface circuit between the GUI and the electro-pneumatic hardware. The trainer's PIC-VB interfacing was designed and improved to involve multiple types of electro-pneumatic apparatus, such as a linear drive, an air motor, a semi-rotary motor, a double-acting cylinder and a single-acting cylinder. The newly fabricated trainer's microcontroller interface can be programmed and re-programmed for numerous combinations of tasks. Based on a survey of 175 student participants, 97% of the respondents agreed that the newly fabricated trainer is user friendly, safe and attractive, and 96.8% strongly agreed that it improved their knowledge development and hands-on skills in the learning process. Furthermore, the Lab Practical Evaluation record indicated that the respondents improved their academic performance (hands-on skills) by an average of 23.5%.

  9. Digital Environment for Movement Control in Surgical Skill Training.

    PubMed

    Juanes, Juan A; Gómez, Juan J; Peguero, Pedro D; Ruisoto, Pablo

    2016-06-01

    Intelligent environments are increasingly becoming useful scenarios for operating computers, and technological devices are practical tools for learning and acquiring clinical skills as part of the medical training process. Within the framework of advanced user interfaces, we present a technological application using Leap Motion to enhance interaction with the user during a laparoscopic surgical intervention, integrating navigation through augmented-reality images by means of manual gestures. The aim is a more natural interaction with the objects involved in a surgical intervention, which are augmented and tied to the user's hand movements.

  10. Implementing virtual reality interfaces for the geosciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, W.; Jacobsen, J.; Austin, A.

    1996-06-01

    For the past few years, a multidisciplinary team of computer and earth scientists at Lawrence Berkeley National Laboratory has been exploring the use of advanced user interfaces, commonly called "Virtual Reality" (VR), coupled with visualization and scientific computing software. Working closely with industry, these efforts have resulted in an environment in which VR technology is coupled with existing visualization and computational tools. VR technology may be thought of as a user interface. It is useful to think of a spectrum, running the gamut from command-line interfaces to completely immersive environments. At one end, the user types three- or six-dimensional parameters at the keyboard. At the other, three- or six-dimensional information is provided by trackers contained either in hand-held devices or attached to the user in some fashion, e.g. attached to a head-mounted display. Rich, extensible and often complex languages are a vehicle whereby the user controls parameters to manipulate object position and location in a virtual world, but the keyboard is an obstacle: typing is cumbersome, error-prone and typically slow. With trackers, the user can interact with these parameters by means of highly developed motor skills. Two specific geoscience application areas are highlighted. In the first, we have used VR technology to manipulate three-dimensional input parameters, such as the spatial locations of injection or production wells in a reservoir simulator. In the second, we demonstrate how VR technology has been used to manipulate visualization tools, such as a tool for computing streamlines via manipulation of a "rake." The rake is presented to the user in the form of a "virtual well" icon and provides the parameters used by the streamline algorithm.
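The rake described above is essentially a movable line of seed points from which streamlines are integrated. A 2-D sketch of that mechanism, using forward-Euler integration in an invented analytic flow field (field, step size and counts are all illustrative, not the paper's implementation):

```python
def velocity(x, y):
    """Analytic stand-in flow field: solid-body rotation about the origin."""
    return -y, x

def streamline(seed, steps=100, dt=0.05):
    """Forward-Euler integration of one streamline from a seed point."""
    x, y = seed
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + dt * vx, y + dt * vy
        path.append((x, y))
    return path

def rake(start, end, n):
    """n seed points evenly spaced along the rake's line segment."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * i / (n - 1), y0 + (y1 - y0) * i / (n - 1))
            for i in range(n)]

# Moving the rake's endpoints (as the VR user does) just changes the seeds.
seeds = rake((1.0, 0.0), (2.0, 0.0), 4)
lines = [streamline(s) for s in seeds]
print(len(lines), len(lines[0]))  # 4 101
```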

  11. Ver-i-Fus: an integrated access control and information monitoring and management system

    NASA Astrophysics Data System (ADS)

    Thomopoulos, Stelios C.; Reisman, James G.; Papelis, Yiannis E.

    1997-01-01

    This paper describes the Ver-i-Fus Integrated Access Control and Information Monitoring and Management (IAC-I2M) system developed by INTELNET Inc. The Ver-i-Fus IAC-I2M system has been designed to meet the most stringent security and information-monitoring requirements while allowing two-way communication between the user and the system. The system offers a flexible interface that permits the integration of practically any sensing device, or combination of sensing devices, including a live-scan fingerprint reader, thus providing biometric verification for enhanced security. Different configurations of the system provide solutions to different sets of access control problems. The re-configurable hardware interface, tied together with biometric verification and a flexible interface that allows Ver-i-Fus to be integrated with an MIS, provides an integrated solution to security, time and attendance, labor monitoring, production monitoring, and payroll applications.

  12. Electro-Active Polymer Based Soft Tactile Interface for Wearable Devices.

    PubMed

    Mun, Seongcheol; Yun, Sungryul; Nam, Saekwang; Park, Seung Koo; Park, Suntak; Park, Bong Je; Lim, Jeong Mook; Kyung, Ki-Uk

    2018-01-01

    This paper reports soft-actuator-based tactile stimulation interfaces applicable to wearable devices. The soft actuator is prepared by multi-layered accumulation of thin electro-active polymer (EAP) films. The multi-layered actuator is designed to produce electrically induced convex protrusive deformation, which can be dynamically programmed for a wide range of tactile stimuli. The maximum vertical protrusion is and the output force is up to 255 mN. The soft actuators are embedded into the fingertip part of a glove and the front part of a forearm band, respectively. We conducted two kinds of experiments with 15 subjects: perceived magnitudes of the actuator's protrusion and of vibrotactile intensity were measured at frequencies of 1 Hz and 191 Hz, respectively. Analysis of the user tests shows that participants perceive variation of protrusion height at the finger pad and modulation of vibration intensity through the proposed soft-actuator-based tactile interface.

  13. COM1/348: Design and Implementation of a Portal for the Market of the Medical Equipment (MEDICOM)

    PubMed Central

    Palamas, S; Vlachos, I; Panou-Diamandi, O; Marinos, G; Kalivas, D; Zeelenberg, C; Nimwegen, C; Koutsouris, D

    1999-01-01

Introduction The MEDICOM system provides the electronic means for medical equipment manufacturers to communicate online with their customers, supporting both the purchasing process and post-market surveillance. The MEDICOM service will be provided over the Internet by the MEDICOM Portal and by a set of distributed subsystems dedicated to handling structured information related to medical devices. There are three kinds of subsystems: the Hypermedia Medical Catalogue (HMC), the Virtual Medical Exhibition (VME), which contains information in the form of virtual models, and the Post Market Surveillance system (PMS). The Universal Medical Devices Nomenclature System (UMDNS) is used to register all products. This work was partially funded by the ESPRIT Project 25289 (MEDICOM). Methods The Portal provides the end-user interface, acts as the yellow pages for finding both products and providers by supplying links to the providers' servers, implements system management, and maintains database compatibility across subsystems. The Portal hosts a database system composed of two parts: (a) the Common Database, which describes a set of encoded parameters common to all subsystems (such as supported languages, geographic regions, and UMDNS codes), and (b) the Short Description Database, which contains summarised descriptions of medical devices, including a text description, the manufacturer code, the UMDNS code, attribute values, and links to the corresponding HTML pages of the HMC, VME, and PMS servers. The Portal's user interface includes services such as end-user profiling and registration, query forms, creation and hosting of newsgroups, links to online libraries, end-user subscription to manufacturers' mailing lists, online information about the MEDICOM system, and special messages or advertisements from manufacturers. Results Platform independence and interoperability characterise the system design. A general-purpose RDBMS is used for the implementation of the databases. The end-user interface is implemented using HTML and Java applets, while the subsystem administration applications are developed in Java. The JDBC interface provides database access to these applications. Communication between subsystems is implemented using CORBA objects, and Java servlets are used on subsystem servers for the activation of remote operations. Discussion In the second half of 1999, the MEDICOM Project will enter the phase of evaluation and pilot operation. The expected benefit of the MEDICOM system is the establishment of a worldwide accessible marketplace between providers and health care professionals. For the latter, this means easy and convenient access to up-to-date, high-quality product information; for providers, it means more efficient marketing procedures and after-sales support.
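The two-part database design described above (a Common Database of shared codes plus a Short Description Database that links each device record to the servers holding the full details) can be sketched with an in-memory relational store. This is a minimal illustration, not the MEDICOM schema: the table layout, the UMDNS code, and the URLs are all invented.

```python
import sqlite3

# Hypothetical miniature of the Portal's two databases.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE umdns_codes (code INTEGER PRIMARY KEY, term TEXT)")
conn.execute("""CREATE TABLE short_descriptions (
    device_id INTEGER PRIMARY KEY,
    umdns_code INTEGER REFERENCES umdns_codes(code),
    manufacturer TEXT,
    summary TEXT,
    hmc_url TEXT)""")
conn.execute("INSERT INTO umdns_codes VALUES (11547, 'Monitors, Bedside')")
conn.execute("INSERT INTO short_descriptions VALUES "
             "(1, 11547, 'ACME Medical', 'Portable bedside monitor', "
             "'http://hmc.example.com/devices/1')")

# A 'yellow pages' lookup: find devices by nomenclature code, then follow
# the stored link to the provider's own HMC server for the full record.
def find_devices(umdns_code):
    rows = conn.execute(
        "SELECT manufacturer, summary, hmc_url FROM short_descriptions "
        "WHERE umdns_code = ?", (umdns_code,))
    return rows.fetchall()

print(find_devices(11547))
```

The point of the split is that the Portal only stores enough to answer a search; the heavyweight content stays on the distributed HMC, VME, and PMS servers behind the stored links.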

  14. [Prevention of medical device-related adverse events in hospitals: Specifying the recommendations of the German Coalition for Patient Safety (APS) for users and operators of anaesthesia equipment].

    PubMed

    Bohnet-Joschko, Sabine; Zippel, Claus; Siebert, Hartmut

    2015-01-01

The use and organisation of medical technology play an important role in patient and user safety in anaesthesia. Our aim was to specify the recommendations of the German Coalition for Patient Safety (APS) for users and operators of anaesthesia equipment, and to explore opportunities and challenges for the safe use and organisation of anaesthesia devices. We conducted a literature search in Medline/PubMed for studies dealing with the APS recommendations for the prevention of medical device-related risks in the context of anaesthesia. In addition, we performed an internet search for reports and recommendations focusing on the use and organisation of medical devices in anaesthesia. Identified studies were grouped and assigned to the recommendations; the division into users and operators was maintained. Instruction and training on anaesthesia machines are sometimes given only minor importance, and failure to perform functional testing appears to be a common cause of critical incidents in anaesthesia. There is also unexploited potential in incident reporting to the federal authority. Starting points for the safe operation of anaesthetic devices can be identified, in particular, at the interface of staff, organisation, and (anaesthesia) technology. The APS recommendations provide valuable guidance on promoting the safe use and organisation of medical devices in anaesthesia, focusing on application-related risks as well as on principles and materials for the safe operation of anaesthesia equipment. Copyright © 2015. Published by Elsevier GmbH.

  15. A pilot study comparing mouse and mouse-emulating interface devices for graphic input.

    PubMed

    Kanny, E M; Anson, D K

    1991-01-01

Adaptive interface devices make it possible for individuals with physical disabilities to use microcomputers and thus perform many tasks that they would otherwise be unable to accomplish. Special equipment is available that purports to allow functional access to the computer for users with disabilities. As technology moves beyond purely keyboard applications to include graphic input, assistive interface devices will need to support graphics as well as text entry. Head-pointing systems that emulate the mouse, used in combination with on-screen keyboards, are of particular interest to persons with severe physical impairment such as high-level quadriplegia. Two such systems currently on the market are the HeadMaster and the Free Wheel. The authors conducted a pilot study comparing graphic input speed using the mouse and two head-pointing interface systems on the Macintosh computer. The study used a single-subject design with six able-bodied subjects to establish a baseline for comparison with persons with severe disabilities. These preliminary data indicated that the HeadMaster was nearly as effective as the mouse and that it was superior to the Free Wheel for graphic input. The pilot study, however, revealed several experimental design problems that need to be addressed to make the study more robust. It also demonstrated the need to evaluate text input as well, so that the effectiveness of the interface devices for text and graphic input can be compared.

  16. Evaluation of User Interface and Workflow Design of a Bedside Nursing Clinical Decision Support System

    PubMed Central

    Yuan, Michael Juntao; Finley, George Mike; Mills, Christy; Johnson, Ron Kim

    2013-01-01

    Background Clinical decision support systems (CDSS) are important tools to improve health care outcomes and reduce preventable medical adverse events. However, the effectiveness and success of CDSS depend on their implementation context and usability in complex health care settings. As a result, usability design and validation, especially in real world clinical settings, are crucial aspects of successful CDSS implementations. Objective Our objective was to develop a novel CDSS to help frontline nurses better manage critical symptom changes in hospitalized patients, hence reducing preventable failure to rescue cases. A robust user interface and implementation strategy that fit into existing workflows was key for the success of the CDSS. Methods Guided by a formal usability evaluation framework, UFuRT (user, function, representation, and task analysis), we developed a high-level specification of the product that captures key usability requirements and is flexible to implement. We interviewed users of the proposed CDSS to identify requirements, listed functions, and operations the system must perform. We then designed visual and workflow representations of the product to perform the operations. The user interface and workflow design were evaluated via heuristic and end user performance evaluation. The heuristic evaluation was done after the first prototype, and its results were incorporated into the product before the end user evaluation was conducted. First, we recruited 4 evaluators with strong domain expertise to study the initial prototype. Heuristic violations were coded and rated for severity. Second, after development of the system, we assembled a panel of nurses, consisting of 3 licensed vocational nurses and 7 registered nurses, to evaluate the user interface and workflow via simulated use cases. We recorded whether each session was successfully completed and its completion time. 
Each nurse was asked to use the National Aeronautics and Space Administration (NASA) Task Load Index to self-evaluate the amount of cognitive and physical burden associated with using the device. Results A total of 83 heuristic violations were identified in the studies. The distribution of the heuristic violations and their average severity are reported. The nurse evaluators successfully completed all 30 sessions of the performance evaluations. All nurses were able to use the device after a single training session. On average, the nurses took 111 seconds (SD 30 seconds) to complete the simulated task. The NASA Task Load Index results indicated that the work overhead on the nurses was low. In fact, most of the burden measures were consistent with zero. The only potentially significant burden was temporal demand, which was consistent with the primary use case of the tool. Conclusions The evaluation has shown that our design was functional and met the requirements demanded by the nurses’ tight schedules and heavy workloads. The user interface embedded in the tool provided compelling utility to the nurse with minimal distraction. PMID:23612350
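The NASA Task Load Index ratings mentioned above are commonly summarised as a "raw TLX": the unweighted mean of the six subscale ratings. The sketch below illustrates that calculation; the subscale names follow the standard instrument, but the example ratings are invented and are not the study's data.

```python
# The six NASA-TLX subscales (each rated 0-100).
SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Unweighted ('raw') TLX: the mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Invented example consistent with the pattern reported above: low overall
# burden, with temporal demand as the dominant contributor.
example = {"mental": 20, "physical": 5, "temporal": 55,
           "performance": 10, "effort": 15, "frustration": 5}
print(raw_tlx(example))
```

The original instrument also supports a weighted score based on pairwise comparisons of the subscales; the raw mean shown here is the simpler variant widely used in usability studies.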

  17. Haptic Stylus and Empirical Studies on Braille, Button, and Texture Display

    PubMed Central

    Kyung, Ki-Uk; Lee, Jun-Young; Park, Junseok

    2008-01-01

This paper presents a haptic stylus interface with a built-in compact tactile display module and an impact module, together with empirical studies on Braille, button, and texture display. We describe preliminary evaluations verifying the tactile display's performance, indicating that it can satisfactorily represent Braille numbers for both sighted and blind users. To demonstrate the haptic feedback capability of the stylus, an experiment providing impact feedback mimicking the click of a button was conducted. Since the developed device is small enough to be attached to a force feedback device, its applicability to combined force and tactile feedback in a pen-held haptic device is also investigated: the handle of a pen-held haptic interface was replaced by the pen-like interface to add tactile feedback capability to the device. Since the system provides a combination of force, tactile, and impact feedback, three haptic representation methods for texture display were compared on surfaces with three texture groups differing in direction, groove width, and shape. In addition, we evaluate the device's capacity to support touch screen operations by providing tactile sensations when a user rubs an image displayed on a monitor. PMID:18317520

  18. Haptic stylus and empirical studies on braille, button, and texture display.

    PubMed

    Kyung, Ki-Uk; Lee, Jun-Young; Park, Junseok

    2008-01-01

This paper presents a haptic stylus interface with a built-in compact tactile display module and an impact module, together with empirical studies on Braille, button, and texture display. We describe preliminary evaluations verifying the tactile display's performance, indicating that it can satisfactorily represent Braille numbers for both sighted and blind users. To demonstrate the haptic feedback capability of the stylus, an experiment providing impact feedback mimicking the click of a button was conducted. Since the developed device is small enough to be attached to a force feedback device, its applicability to combined force and tactile feedback in a pen-held haptic device is also investigated: the handle of a pen-held haptic interface was replaced by the pen-like interface to add tactile feedback capability to the device. Since the system provides a combination of force, tactile, and impact feedback, three haptic representation methods for texture display were compared on surfaces with three texture groups differing in direction, groove width, and shape. In addition, we evaluate the device's capacity to support touch screen operations by providing tactile sensations when a user rubs an image displayed on a monitor.

  19. Laboratory process control using natural language commands from a personal computer

    NASA Technical Reports Server (NTRS)

    Will, Herbert A.; Mackin, Michael A.

    1989-01-01

    PC software is described which provides flexible natural language process control capability with an IBM PC or compatible machine. Hardware requirements include the PC, and suitable hardware interfaces to all controlled devices. Software required includes the Microsoft Disk Operating System (MS-DOS) operating system, a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given as well as a description of an application of the system.

  20. Asynchronous P300-based brain-computer interface to control a virtual environment: initial tests on end users.

    PubMed

    Aloise, Fabio; Schettini, Francesca; Aricò, Pietro; Salinari, Serenella; Guger, Christoph; Rinsma, Johanna; Aiello, Marco; Mattia, Donatella; Cincotti, Febo

    2011-10-01

Motor disability and/or ageing can prevent individuals from fully enjoying home facilities, thus worsening their quality of life. Advances in the field of accessible user interfaces for domotic appliances can represent a valuable way to improve the independence of these persons. An asynchronous P300-based Brain-Computer Interface (BCI) system was recently validated for environmental control with the participation of healthy young volunteers. In this study, the asynchronous P300-based BCI for interaction with a virtual home environment was tested with the participation of potential end users (clients of a Frisian home care organization) with limited autonomy due to ageing and/or motor disabilities. System testing revealed that the minimum number of stimulation sequences needed to achieve correct classification had higher intra-subject variability in potential end users than previously observed in young controls. Here we show that the asynchronous modality, which continuously adapts its speed to the user's state, performed significantly better than the synchronous mode. Furthermore, the asynchronous modality confirmed its reliability in avoiding misclassifications and false positives, as previously shown in young healthy subjects. The asynchronous modality may contribute to filling the usability gap between BCI systems and traditional input devices, representing an important step towards their use in the activities of daily living.
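The adaptive behaviour described above, using only as many stimulation sequences as the evidence requires, can be sketched as an accumulate-and-threshold loop. Everything concrete here (the scores, the margin, the appliance names) is invented for illustration; the paper's actual classifier and stopping criterion differ.

```python
# Dynamic stopping sketch: accumulate per-item classifier scores over
# successive stimulation sequences and select an item only once its
# evidence clearly leads; otherwise stay idle (no selection emitted).
def classify_adaptively(sequence_scores, margin=1.0, max_sequences=10):
    """sequence_scores: list of per-sequence dicts {item: score}.

    Returns (selected_item, sequences_used), or (None, n) if no item's
    accumulated score leads the runner-up by `margin`.
    """
    totals = {}
    for n, scores in enumerate(sequence_scores[:max_sequences], start=1):
        for item, s in scores.items():
            totals[item] = totals.get(item, 0.0) + s
        ranked = sorted(totals.values(), reverse=True)
        # Stop early once the best item leads the runner-up by `margin`.
        if len(ranked) > 1 and ranked[0] - ranked[1] >= margin:
            return max(totals, key=totals.get), n
    return None, min(len(sequence_scores), max_sequences)

seqs = [{"TV": 0.6, "light": 0.5}, {"TV": 1.2, "light": 0.2}]
print(classify_adaptively(seqs))
```

The idle branch is what makes the interface asynchronous: when the user is not attending to the stimuli, no item accumulates a decisive lead, so the system emits no command rather than a false positive.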

  1. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with the animator's motion in 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking the animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse-and-keyboard interface for both expert and non-expert users.

  2. To twist, roll, stroke or poke? A study of input devices for menu navigation in the cockpit.

    PubMed

    Stanton, Neville A; Harvey, Catherine; Plant, Katherine L; Bolton, Luke

    2013-01-01

    Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design. This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.

  3. CE-SAM: a conversational interface for ISR mission support

    NASA Astrophysics Data System (ADS)

    Pizzocaro, Diego; Parizas, Christos; Preece, Alun; Braines, Dave; Mott, David; Bakdash, Jonathan Z.

    2013-05-01

    There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interactions we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly-increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex, because assets may be "owned" by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE) - a form of controlled natural language that is both human-readable and machine processable (i.e. can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface using natural language, which the system converts to CE for feeding-back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.

  4. Glove-TalkII--a neural-network interface which maps gestures to parallel formant speech synthesizer controls.

    PubMed

    Fels, S S; Hinton, G E

    1998-01-01

    Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
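The gating scheme described above, in which a gating network weights the outputs of a vowel network and a consonant network, amounts to a convex combination of the two expert outputs. The tiny linear stand-in "networks" below are invented for illustration; Glove-TalkII used trained neural networks over real glove and tracker data.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def vowel_net(hand):      # stand-in for the fixed hand-position-to-vowel map
    return [0.8 * h for h in hand]

def consonant_net(hand):  # stand-in for the user-trained consonant network
    return [0.1 * h + 0.2 for h in hand]

def gate(hand):           # stand-in for the trained gating network, g in [0, 1]
    return sigmoid(sum(hand) - 1.0)

def formant_controls(hand):
    """Blend the two experts into one set of synthesizer control parameters."""
    g = gate(hand)
    v, c = vowel_net(hand), consonant_net(hand)
    # Convex combination: g weights the vowel output, (1 - g) the consonant.
    return [g * vi + (1.0 - g) * ci for vi, ci in zip(v, c)]

print(formant_controls([0.5, 0.5]))
```

Because the gate's output varies smoothly with hand configuration, the blended controls transition continuously between vowel-like and consonant-like articulation rather than switching abruptly.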

  5. Wearable computer for mobile augmented-reality-based controlling of an intelligent robot

    NASA Astrophysics Data System (ADS)

    Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino

    2000-10-01

An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.

  6. Customizable scientific web-portal for DIII-D nuclear fusion experiment

    NASA Astrophysics Data System (ADS)

    Abla, G.; Kim, E. N.; Schissel, D. P.

    2010-04-01

Increasing utilization of the Internet and convenient web technologies has made the web-portal a major application interface for remote participation and control of scientific instruments. While web-portals have provided a centralized gateway for multiple computational services, the amount of visual output often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments. Since each scientist may have different priorities and areas of interest in the experiment, filtering and organizing information based on the individual user's needs can increase the usability and efficiency of a web-portal. DIII-D is the largest magnetic nuclear fusion device in the US. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It offers a customizable interface with personalized page layouts and a list of services for users to select, so that each individual user can create a unique working environment to fit his or her own needs and interests. Customizable services include real-time experiment status monitoring, diagnostic data access, and interactive data analysis and visualization. The web-portal also supports interactive collaborations by providing a collaborative logbook and online instant announcement services. The DIII-D web-portal development utilizes a multi-tier software architecture and Web 2.0 technologies and tools, such as AJAX and Django, to develop a highly interactive and customizable user interface.
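The per-user customization described above amounts to overlaying personal preferences on a site-wide default layout, which can be sketched as a small merge function. The service names and the preference format below are invented examples, not the DIII-D portal's actual configuration.

```python
# Site-wide default list of portal services, in default display order
# (hypothetical names modeled on the services listed above).
DEFAULT_SERVICES = ["experiment_status", "diagnostic_data",
                    "data_analysis", "logbook", "announcements"]

def build_layout(user_prefs):
    """Overlay one user's preferences on the default layout.

    user_prefs: {'hide': [...], 'order': [...]}, both keys optional.
    """
    hidden = set(user_prefs.get("hide", []))
    visible = [s for s in DEFAULT_SERVICES if s not in hidden]
    # Services the user explicitly ordered come first; the rest keep
    # their default relative order.
    preferred = [s for s in user_prefs.get("order", []) if s in visible]
    rest = [s for s in visible if s not in preferred]
    return preferred + rest

print(build_layout({"hide": ["announcements"],
                    "order": ["logbook", "experiment_status"]}))
```

Keeping the defaults server-side and storing only each user's deltas is a common design for this kind of portal: new services added to the default list appear for everyone without invalidating saved preferences.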

  7. Video game interfaces for interactive lower and upper member therapy.

    PubMed

    Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron; Alves, Silas

    2013-01-01

With recent advances in electronics and mechanics, a new trend in interaction is changing how we interact with our environment, daily tasks, and other people. Even though sensor-based technologies and tracking systems have been around for several years, they have only recently become affordable and are now used in areas such as physical and mental rehabilitation, educational applications, physical exercise, and natural interaction. This work presents the integration of two mainstream video game interfaces as tools for developing an interactive lower- and upper-limb therapy tool. The goal is to study the potential of these devices as complementary didactic elements for improving and following user performance during a series of exercises with virtual and real devices.

  8. The BCI competition. III: Validating alternative approaches to actual BCI problems.

    PubMed

    Blankertz, Benjamin; Müller, Klaus-Robert; Krusienski, Dean J; Schalk, Gerwin; Wolpaw, Jonathan R; Schlögl, Alois; Pfurtscheller, Gert; Millán, José del R; Schröder, Michael; Birbaumer, Niels

    2006-06-01

    A brain-computer interface (BCI) is a system that allows its users to control external devices with brain activity. Although the proof-of-concept was given decades ago, the reliable translation of user intent into device control commands is still a major challenge. Success requires the effective interaction of two adaptive controllers: the user's brain, which produces brain activity that encodes intent, and the BCI system, which translates that activity into device control commands. In order to facilitate this interaction, many laboratories are exploring a variety of signal analysis techniques to improve the adaptation of the BCI system to the user. In the literature, many machine learning and pattern classification algorithms have been reported to give impressive results when applied to BCI data in offline analyses. However, it is more difficult to evaluate their relative value for actual online use. BCI data competitions have been organized to provide objective formal evaluations of alternative methods. Prompted by the great interest in the first two BCI Competitions, we organized the third BCI Competition to address several of the most difficult and important analysis problems in BCI research. The paper describes the data sets that were provided to the competitors and gives an overview of the results.

  9. Assessing the Usability of Six Data Entry Mobile Interfaces for Caregivers: A Randomized Trial.

    PubMed

    Ehrler, Frederic; Haller, Guy; Sarrey, Evelyne; Walesa, Magali; Wipfli, Rolf; Lovis, Christian

    2015-12-15

    There is an increased demand in hospitals for tools, such as dedicated mobile device apps, that enable the recording of clinical information in an electronic format at the patient's bedside. Although the human-machine interface design on mobile devices strongly influences the accuracy and effectiveness of data recording, there is still a lack of evidence as to which interface design offers the best guarantee for ease of use and quality of recording. Therefore, interfaces need to be assessed both for usability and reliability because recording errors can seriously impact the overall level of quality of the data and affect the care provided. In this randomized crossover trial, we formally compared 6 handheld device interfaces for both speed of data entry and accuracy of recorded information. Three types of numerical data commonly recorded at the patient's bedside were used to evaluate the interfaces. In total, 150 health care professionals from the University Hospitals of Geneva volunteered to record a series of randomly generated data on each of the 6 interfaces provided on a smartphone. The interfaces were presented in a randomized order as part of fully automated data entry scenarios. During the data entry process, accuracy and effectiveness were automatically recorded by the software. Various types of errors occurred, which ranged from 0.7% for the most reliable design to 18.5% for the least reliable one. The length of time needed for data recording ranged from 2.81 sec to 14.68 sec, depending on the interface. The numeric keyboard interface delivered the best performance for pulse data entry with a mean time of 3.08 sec (SD 0.06) and an accuracy of 99.3%. Our study highlights the critical impact the choice of an interface can have on the quality of recorded data. 
Selecting an interface should be driven less by the needs of specific end-user groups or the necessity to facilitate the developer's task (eg, by opting for default solutions provided by commercial platforms) than by the level of speed and accuracy an interface can provide for recording information. An important effort must be made to properly validate mobile device interfaces intended for use in the clinical setting. In this regard, our study identified the numeric keyboard, among the proposed designs, as the most accurate interface for entering specific numerical values. This is an important step toward providing clearer guidelines on which interface to choose for the appropriate use of handheld device interfaces in the health care setting.
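The study's two headline metrics, error rate and entry time, can be illustrated by aggregating a log of data entry attempts. The toy log below is invented, not the trial's data; in the study itself, the accuracy and timing were captured automatically by the test software.

```python
import statistics

def summarize(log):
    """Summarize a list of (entered_value, expected_value, seconds) tuples.

    An attempt counts as an error when the recorded value differs from
    the value the scenario asked for.
    """
    errors = sum(1 for entered, expected, _ in log if entered != expected)
    times = [seconds for _, _, seconds in log]
    return {"error_rate": errors / len(log),
            "mean_time": statistics.mean(times)}

# Invented log: four pulse entries, one of them mistyped.
log = [("72", "72", 3.1), ("128", "128", 2.9),
       ("80", "88", 3.6), ("95", "95", 3.2)]
print(summarize(log))
```

Comparing interfaces then reduces to comparing these per-interface summaries, which is why the ranges reported above (0.7% to 18.5% errors, 2.81 to 14.68 seconds) are so decisive.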

  10. Assessing the Usability of Six Data Entry Mobile Interfaces for Caregivers: A Randomized Trial

    PubMed Central

    Haller, Guy; Sarrey, Evelyne; Walesa, Magali; Wipfli, Rolf; Lovis, Christian

    2015-01-01

    Background There is an increased demand in hospitals for tools, such as dedicated mobile device apps, that enable the recording of clinical information in an electronic format at the patient’s bedside. Although the human-machine interface design on mobile devices strongly influences the accuracy and effectiveness of data recording, there is still a lack of evidence as to which interface design offers the best guarantee for ease of use and quality of recording. Therefore, interfaces need to be assessed both for usability and reliability because recording errors can seriously impact the overall level of quality of the data and affect the care provided. Objective In this randomized crossover trial, we formally compared 6 handheld device interfaces for both speed of data entry and accuracy of recorded information. Three types of numerical data commonly recorded at the patient’s bedside were used to evaluate the interfaces. Methods In total, 150 health care professionals from the University Hospitals of Geneva volunteered to record a series of randomly generated data on each of the 6 interfaces provided on a smartphone. The interfaces were presented in a randomized order as part of fully automated data entry scenarios. During the data entry process, accuracy and effectiveness were automatically recorded by the software. Results Various types of errors occurred, which ranged from 0.7% for the most reliable design to 18.5% for the least reliable one. The length of time needed for data recording ranged from 2.81 sec to 14.68 sec, depending on the interface. The numeric keyboard interface delivered the best performance for pulse data entry with a mean time of 3.08 sec (SD 0.06) and an accuracy of 99.3%. Conclusions Our study highlights the critical impact the choice of an interface can have on the quality of recorded data. 
Selecting an interface should be driven less by the needs of specific end-user groups or the necessity to facilitate the developer’s task (eg, by opting for default solutions provided by commercial platforms) than by the level of speed and accuracy an interface can provide for recording information. An important effort must be made to properly validate mobile device interfaces intended for use in the clinical setting. In this regard, our study identified the numeric keyboard, among the proposed designs, as the most accurate interface for entering specific numerical values. This is an important step toward providing clearer guidelines on which interface to choose for the appropriate use of handheld device interfaces in the health care setting. PMID:27025648

  11. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation

    PubMed Central

    2011-01-01

This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, and 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054
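A depth-sensor NUI of the kind surveyed above ultimately maps skeleton motion to virtual-globe navigation commands. The sketch below shows one plausible mapping from hand displacement to pan and zoom actions; the thresholds, axes, and gesture set are invented and do not reproduce Kinoogle's actual gestures.

```python
# Map right-hand displacement (metres) from a calibrated rest pose to a
# discrete virtual-globe command. The dominant axis wins; motions below
# `threshold` are ignored to suppress skeleton-tracking jitter.
def gesture_to_command(dx, dy, dz, threshold=0.15):
    if abs(dz) >= max(abs(dx), abs(dy), threshold):
        return "zoom_in" if dz < 0 else "zoom_out"   # push/pull to zoom
    if abs(dx) >= max(abs(dy), threshold):
        return "pan_east" if dx > 0 else "pan_west"
    if abs(dy) >= threshold:
        return "pan_north" if dy > 0 else "pan_south"
    return "idle"

print(gesture_to_command(0.3, 0.05, 0.0))
```

A real implementation would read the joint positions from a Kinect driver each frame and smooth them over time before applying a rule like this; the point here is only the shape of the mapping from continuous motion to discrete navigation commands.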

  12. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation.

    PubMed

    Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo

    2011-07-26

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, and 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.

  13. Brain-computer interface technology: a review of the first international meeting.

    PubMed

    Wolpaw, J R; Birbaumer, N; Heetderks, W J; McFarland, D J; Peckham, P H; Schalk, G; Donchin, E; Quatrano, L A; Robinson, C J; Vaughan, T M

    2000-06-01

    Over the past decade, many laboratories have begun to explore brain-computer interface (BCI) technology as a radically new communication option for those with neuromuscular impairments that prevent them from using conventional augmentative communication methods. BCIs provide these users with communication channels that do not depend on peripheral nerves and muscles. This article summarizes the first international meeting devoted to BCI research and development. Current BCIs use electroencephalographic (EEG) activity recorded at the scalp or single-unit activity recorded from within the cortex to control cursor movement, select letters or icons, or operate a neuroprosthesis. The central element in each BCI is a translation algorithm that converts electrophysiological input from the user into output that controls external devices. BCI operation depends on effective interaction between two adaptive controllers: the user, who encodes his or her commands in the electrophysiological input provided to the BCI, and the BCI, which recognizes the commands contained in the input and expresses them in device control. Current BCIs have maximum information transfer rates of 5-25 bits/min. Achievement of greater speed and accuracy depends on improvements in signal processing, translation algorithms, and user training. These improvements depend on increased interdisciplinary cooperation among neuroscientists, engineers, computer programmers, psychologists, and rehabilitation specialists, and on the adoption and widespread application of objective methods for evaluating alternative approaches. The practical use of BCI technology depends on the development of appropriate applications, identification of appropriate user groups, and careful attention to the needs and desires of individual users. BCI research and development will also benefit from greater emphasis on peer-reviewed publications, and from the adoption of standard venues for presentations and discussion.
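
    The "translation algorithm" named in this abstract can be pictured with a deliberately simplified sketch: a linear rule that maps an EEG band-power feature to a one-dimensional cursor velocity while slowly adapting its baseline to the user. All names, weights, and the adaptation scheme below are illustrative assumptions, not any specific BCI presented at the meeting.

```python
# Hypothetical sketch of a BCI translation algorithm: one EEG feature
# (band power) is linearly mapped to a cursor velocity, and a running
# baseline adapts so that the user's resting state maps to ~zero velocity.

def band_power(samples):
    """Mean squared amplitude of an EEG epoch (a crude power estimate)."""
    return sum(s * s for s in samples) / len(samples)

class LinearTranslator:
    def __init__(self, gain=1.0, baseline=0.0, rate=0.1):
        self.gain = gain          # scales feature deviation -> velocity
        self.baseline = baseline  # running estimate of resting power
        self.rate = rate          # adaptation rate of the baseline

    def translate(self, epoch):
        p = band_power(epoch)
        v = self.gain * (p - self.baseline)   # signed cursor velocity
        # Adapt the baseline toward the observed power (the "adaptive
        # controller" role the abstract assigns to the BCI side).
        self.baseline += self.rate * (p - self.baseline)
        return v
```

    In a real system the feature extraction, weights, and adaptation would be fitted to the individual user, which is exactly the user-training loop the abstract emphasizes.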

  14. Cross-Platform User Interface of E-Learning Applications

    ERIC Educational Resources Information Center

    Stoces, Michal; Masner, Jan; Jarolímek, Jan; Šimek, Pavel; Vanek, Jirí; Ulman, Miloš

    2015-01-01

    The paper discusses the development of Web educational services for specific groups. A key feature is to allow the display and use of educational materials and training services on the widest possible set of different devices, especially in the browsers of classic desktop computers, notebooks, tablets and mobile phones, and also on different readers for…

  15. A Kinect-Based Assessment System for Smart Classroom

    ERIC Educational Resources Information Center

    Kumara, W. G. C. W.; Wattanachote, Kanoksak; Battulga, Batbaatar; Shih, Timothy K.; Hwang, Wu-Yuin

    2015-01-01

    With the advancements of the human computer interaction field, nowadays it is possible for the users to use their body motions, such as swiping, pushing and moving, to interact with the content of computers or smart phones without traditional input devices like mouse and keyboard. With the introduction of gesture-based interface Kinect from…

  16. I-SAVE: AN INTERACTIVE REAL-TIME MONITOR AND CONTROLLER TO INFLUENCE ENERGY CONSERVATION BEHAVIOR BY IMPULSE SAVING

    EPA Science Inventory

    Simulation-based model to explore the benefits of monitoring and control to energy saving opportunities in residential homes; an adaptive algorithm to predict the type of electrical loads; a prototype user friendly interface monitoring and control device to save energy; a p...

  17. Soldier-Computer Interface

    DTIC Science & Technology

    2015-01-27

    …placed on the user by the required tasks. Design areas of concern include seating, input and output device location and design, ambient… software, hardware, and workspace design for the test function of operability that influence operator performance in a computer-based system. (Fragmentary excerpt; the report's table of contents lists sample design checklists and sample task checklists as appendices.)

  18. Eavesdropping on Electronic Guidebooks: Observing Learning Resources in Shared Listening Environments.

    ERIC Educational Resources Information Center

    Woodruff, Allison; Aoki, Paul M.; Grinter, Rebecca E.; Hurst, Amy; Szymanski, Margaret H.; Thornton, James D.

    This paper describes an electronic guidebook, "Sotto Voce," that enables visitors to share audio information by eavesdropping on each other's guidebook activity. The first section discusses the design and implementation of the guidebook device, key aspects of its user interface, the design goals for the audio environment, the eavesdropping…

  19. Method and apparatus for assessing weld quality

    DOEpatents

    Smartt, Herschel B.; Kenney, Kevin L.; Johnson, John A.; Carlson, Nancy M.; Clark, Denis E.; Taylor, Paul L.; Reutzel, Edward W.

    2001-01-01

    Apparatus for determining a quality of a weld produced by a welding device according to the present invention includes a sensor operatively associated with the welding device. The sensor is responsive to at least one welding process parameter during a welding process and produces a welding process parameter signal that relates to the at least one welding process parameter. A computer connected to the sensor is responsive to the welding process parameter signal produced by the sensor. A user interface operatively associated with the computer allows a user to select a desired welding process. The computer processes the welding process parameter signal produced by the sensor in accordance with one of a constant voltage algorithm, a short duration weld algorithm or a pulsed current analysis module depending on the desired welding process selected by the user. The computer produces output data indicative of the quality of the weld.
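
    The patent abstract describes a dispatch pattern: the computer routes the sensed process-parameter signal to one of three analysis routines according to the welding process the user selected. The sketch below shows only that dispatch pattern; the routine internals, thresholds, and names are invented for illustration and are not the patented algorithms.

```python
# Illustrative dispatch of a sampled welding process parameter signal to a
# process-specific analysis routine, returning a pass/fail quality flag.
# All thresholds below are made-up example values.

def constant_voltage(signal, nominal=24.0, tol=2.0):
    """Pass if the mean of the signal stays near a nominal setpoint."""
    return abs(sum(signal) / len(signal) - nominal) <= tol

def short_duration(signal, max_spike=30.0):
    """Pass if no sample spikes above a limit during a short weld."""
    return max(signal) <= max_spike

def pulsed_current(signal, min_peaks=2, peak=25.0):
    """Pass if enough current pulses reach the expected peak level."""
    return sum(1 for v in signal if v >= peak) >= min_peaks

ALGORITHMS = {
    "constant_voltage": constant_voltage,
    "short_duration": short_duration,
    "pulsed_current": pulsed_current,
}

def assess_weld(process, signal):
    """Select the analysis routine from the user's chosen process."""
    if process not in ALGORITHMS:
        raise ValueError(f"unknown welding process: {process}")
    return ALGORITHMS[process](signal)
```
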

  20. Methods and apparatus for graphical display and editing of flight plans

    NASA Technical Reports Server (NTRS)

    Gibbs, Michael J. (Inventor); Adams, Jr., Mike B. (Inventor); Chase, Karl L. (Inventor); Lewis, Daniel E. (Inventor); McCrobie, Daniel E. (Inventor); Omen, Debi Van (Inventor)

    2002-01-01

    Systems and methods are provided for an integrated graphical user interface which facilitates the display and editing of aircraft flight-plan data. A user (e.g., a pilot) located within the aircraft provides input to a processor through a cursor control device and receives visual feedback via a display produced by a monitor. The display includes various graphical elements associated with the lateral position, vertical position, flight-plan and/or other indicia of the aircraft's operational state as determined from avionics data and/or various data sources. Through use of the cursor control device, the user may modify the flight-plan and/or other such indicia graphically in accordance with feedback provided by the display. In one embodiment, the display includes a lateral view, a vertical profile view, and a hot-map view configured to simplify the display and editing of the aircraft's flight-plan data.

  1. Driving Interface Based on Tactile Sensors for Electric Wheelchairs or Trolleys

    PubMed Central

    Trujillo-León, Andrés; Vidal-Verdú, Fernando

    2014-01-01

    This paper introduces a novel device based on a tactile interface to replace the attendant joystick in electric wheelchairs. It can also be used in other vehicles such as shopping trolleys. Its use allows intuitive driving that requires little or no training, so its usability is high. This is achieved by a tactile sensor located on the handlebar of the chair or trolley and the processing of the information it provides. When the user interacts with the handle of the chair or trolley, he or she exerts a pressure pattern that depends on the intention to accelerate, brake or turn to the left or right. The electronics within the device then perform the signal conditioning and processing of the information received, identifying the intention of the user from this pattern using an algorithm and translating it into control signals for the control module of the wheelchair. These signals are equivalent to those provided by a joystick. This proposal aims to help disabled people and their attendants and to prolong personal autonomy in the context of aging populations. PMID:24518892
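
    The mapping described here, from a handlebar pressure pattern to joystick-equivalent commands, can be sketched as follows. The specific rule and the threshold value are assumptions for illustration, not the authors' algorithm.

```python
# Hedged sketch: derive joystick-like (forward, turn) commands from
# normalized left/right grip pressures on a handlebar tactile sensor.

def handlebar_to_joystick(left, right, push_threshold=0.2):
    """left/right: pressures in [0, 1]; returns (forward, turn) commands."""
    forward = (left + right) / 2.0   # pushing both sides -> accelerate
    if forward < push_threshold:
        forward = 0.0                # light touch -> brake / stop
    turn = right - left              # uneven pressure -> steer
    return forward, turn
```

    A real implementation would classify a richer pressure image from the sensor and condition the signals before emitting control commands, as the abstract indicates.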

  2. Using Non-Traditional Interfaces to Support Physical Therapy for Knee Strengthening.

    PubMed

    Torres, Andrea; López, Gustavo; Guerrero, Luis A

    2016-09-01

    Physical therapy consists mainly of rehabilitation processes that aim to help patients overcome injuries and develop, maintain, or restore maximum body movement. Knee rehabilitation is one kind of physical therapy that requires daily exercises, which patients may find monotonous and boring, discouraging their improvement. This is compounded by the fact that most physical therapists assess exercise performance through verbal and visual means with mostly manual measurements, making it difficult to consistently verify that patients perform the exercises correctly. This article describes a physical therapy monitoring system that uses wearable technology to assess exercise performance and patient progress. The wearable device measures movement data from the patient's limb and transfers it to a mobile device. Moreover, the user interface is a game, which provides an entertaining approach to therapy exercises. The article shows that the developed system significantly increases daily user engagement in rehabilitation exercises through gameplay that matches physical therapy requirements for knee rehabilitation, while offering useful quantitative information to therapists.

  3. Overview Electrotactile Feedback for Enhancing Human Computer Interface

    NASA Astrophysics Data System (ADS)

    Pamungkas, Daniel S.; Caesarendra, Wahyu

    2018-04-01

    To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback has been increasingly utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user’s interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear, and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on its surface. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, the sensory receptors within the skin that sense tactile stimuli and electric currents are described, and several factors that influence how electrical signals are transmitted to the brain through human skin are explained.

  4. LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER

    NASA Technical Reports Server (NTRS)

    Will, H.

    1994-01-01

    The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Sometimes process control schedules require changes frequently, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator, or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without the operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
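
    The command-file idea can be sketched in miniature: each schedule line pairs a time with a natural-language-style command that is routed to a user-written device-driver routine. The command names and driver routines below are hypothetical; the original system generated its glue code in FORTRAN 77 and Pascal, and Python is used here only for brevity.

```python
# Sketch of a natural-language process-control schedule: parse lines of
# the form 'HH:MM COMMAND [args...]' and dispatch each command to a
# user-written driver routine. Commands and drivers are invented examples.

log = []  # records what the "devices" were asked to do

def open_valve(args):
    log.append(("open_valve", args))

def read_sensor(args):
    log.append(("read_sensor", args))

DRIVERS = {"OPEN VALVE": open_valve, "READ SENSOR": read_sensor}

def run_schedule(lines):
    """Execute schedule lines in order, matching the longest-known command."""
    for line in lines:
        time, rest = line.split(maxsplit=1)
        for command, driver in DRIVERS.items():
            if rest.upper().startswith(command):
                driver(rest[len(command):].strip())
                break
        else:
            raise ValueError(f"no driver for: {rest!r}")
```

    Because the operator only edits the text schedule, steps can be added, deleted, or rearranged several times a day without reprogramming, which is the flexibility the abstract emphasizes.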

  5. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.
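
    As a rough illustration of the XML-based control file the manual describes, the snippet below parses a tiny, hypothetical configuration. The element and attribute names are invented, since the actual Object User Interface schema is not reproduced in this abstract.

```python
# Minimal sketch: read a hypothetical XML control file that registers
# environmental-resource models by name and executable.

import xml.etree.ElementTree as ET

CONTROL = """
<oui>
  <model name="runoff" executable="runoff.exe"/>
  <model name="routing" executable="route.exe"/>
</oui>
"""

def load_models(xml_text):
    """Map each declared model name to its executable path."""
    root = ET.fromstring(xml_text)
    return {m.get("name"): m.get("executable") for m in root.findall("model")}
```
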

  6. FermiLib v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCCLEAN, JARROD; HANER, THOMAS; STEIGER, DAMIAN

    FermiLib is an open source software package designed to facilitate the development and testing of algorithms for simulations of fermionic systems on quantum computers. Fermionic simulations represent an important application of early quantum devices, with many potential high-value targets such as quantum chemistry for the development of new catalysts. This software strives to provide a link between the required domain expertise in specific fermionic applications and quantum computing, enabling more users to directly interface with, and develop for, these applications. It is an extensible Python library designed to interface with the high-performance quantum simulator ProjectQ, as well as application-specific software such as PSI4 from the domain of quantum chemistry. Such software is key to enabling effective user facilities in quantum computation research.

  7. The investigation and implementation of real-time face pose and direction estimation on mobile computing devices

    NASA Astrophysics Data System (ADS)

    Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae

    2012-04-01

    Mobile computing devices have many limitations, such as a relatively small user interface and slow computing speed. Augmented reality typically requires face pose estimation, which can serve as both an HCI and an entertainment tool. For a real-time implementation of head pose estimation on relatively resource-limited mobile platforms, several constraints must be met while preserving adequate face pose estimation accuracy. The proposed face pose estimation method meets this objective. Experimental results on a test Android mobile device show satisfactory real-time performance and accuracy.

  8. Double-heterojunction nanorod light-responsive LEDs for display applications.

    PubMed

    Oh, Nuri; Kim, Bong Hoon; Cho, Seong-Yong; Nam, Sooji; Rogers, Steven P; Jiang, Yiran; Flanagan, Joseph C; Zhai, You; Kim, Jae-Hwan; Lee, Jungyup; Yu, Yongjoon; Cho, Youn Kyoung; Hur, Gyum; Zhang, Jieqian; Trefonas, Peter; Rogers, John A; Shim, Moonsub

    2017-02-10

    Dual-functioning displays, which can simultaneously transmit and receive information and energy through visible light, would enable enhanced user interfaces and device-to-device interactivity. We demonstrate that double heterojunctions designed into colloidal semiconductor nanorods allow both efficient photocurrent generation through a photovoltaic response and electroluminescence within a single device. These dual-functioning, all-solution-processed double-heterojunction nanorod light-responsive light-emitting diodes open feasible routes to a variety of advanced applications, from touchless interactive screens to energy harvesting and scavenging displays and massively parallel display-to-display data communication. Copyright © 2017, American Association for the Advancement of Science.

  9. Is There a Chance for a Standardised User Interface?

    ERIC Educational Resources Information Center

    Fletcher, Liz

    1993-01-01

    Issues concerning the implementation of standard user interfaces for CD-ROMs are discussed, including differing perceptions of the ideal interface, graphical user interfaces, user needs, and the standard protocols. It is suggested users should be able to select from a variety of user interfaces on each CD-ROM. (EA)

  10. Creating a Prototype Web Application for Spacecraft Real-Time Data Visualization on Mobile Devices

    NASA Technical Reports Server (NTRS)

    Lang, Jeremy S.; Irving, James R.

    2014-01-01

    Mobile devices (smart phones, tablets) have become commonplace among almost all sectors of the workforce, especially in the technical and scientific communities. These devices provide individuals the ability to be constantly connected to any area of interest they may have, whenever and wherever they are located. The Huntsville Operations Support Center (HOSC) is attempting to take advantage of this constant connectivity to extend the data visualization component of the Payload Operations and Integration Center (POIC) to a person's mobile device. POIC users currently have a rather unique capability to create custom user interfaces in order to view International Space Station (ISS) payload health and status telemetry. These displays are used at various console positions within the POIC. The Software Engineering team has created a Mobile Display capability that allows authenticated users to view the same displays created for the console positions on the mobile device of their choice. Utilizing modern technologies including ASP.NET, JavaScript, and HTML5, we have created a web application that renders the user's displays in any modern desktop or mobile web browser, regardless of the operating system on the device. Additionally, the application is device aware, which enables it to render its configuration and selection menus with themes that correspond to the particular device. The Mobile Display application uses a communication mechanism known as SignalR to push updates to the web client. This communication mechanism automatically detects the best communication protocol between the client and server and also manages disconnections and reconnections of the client to the server. One benefit of this application is that the user can monitor important telemetry even while away from their console position. If expanded to the scientific community, this application would allow a scientist to view a snapshot of the state of their particular experiment at any time or place. Because the web application renders the displays that can currently be created with the POIC ground system, the user can tailor their displays for a particular device using tools that they are already trained to use.

  11. The One Universal Graph — a free and open graph database

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Champion, Corbin

    2016-02-01

    Recent developments in graph databases are mostly huge projects involving big organizations, big operations and big capital, as the name Big Data attests. We propose the concept of the One Universal Graph (OUG), which states that all observable and known objects and concepts (physical, conceptual or digitally represented) can be connected in one single graph; furthermore, the OUG can be implemented with a very simple text file format using free software, capable of being executed on Android or smaller devices. As such, the One Universal Graph Data Exchange (GOUDEX) modules can potentially be installed on the hundreds of millions of Android devices and Intel-compatible computers shipped annually. Coupled with its open nature and its ability to connect to the leading search engines and databases currently in operation, GOUDEX has the potential to become the largest and a better interface for users and programmers to interact with the data on the Internet. With a Web user interface for working and programming in a native Linux environment, Free Crowdware implemented on GOUDEX can help inexperienced users learn programming with better-organized documentation for free software, and can manage a programmer's contributions down to a single line of code or a single variable in a software project. It can become the first practically realizable “Internet brain” on which a global artificial intelligence system can be implemented. Being practically free and open, the One Universal Graph can have significant applications in robotics and artificial intelligence as well as social networks.
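
    One way to picture the claim that a whole graph can live in "a very simple text file format" is an edge-per-line layout. The format below is an assumption for illustration, not the OUG/GOUDEX specification.

```python
# Hypothetical minimal graph text format: one 'source -> target' edge per
# line, with blank lines and '#' comments ignored. Parsing it yields an
# adjacency mapping that any small device could hold and exchange.

def parse_graph(text):
    graph = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        src, dst = (part.strip() for part in line.split("->"))
        graph.setdefault(src, []).append(dst)
    return graph
```
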

  12. Beyond intuitive anthropomorphic control: recent achievements using brain computer interface technologies

    NASA Astrophysics Data System (ADS)

    Pohlmeyer, Eric A.; Fifer, Matthew; Rich, Matthew; Pino, Johnathan; Wester, Brock; Johannes, Matthew; Dohopolski, Chris; Helder, John; D'Angelo, Denise; Beaty, James; Bensmaia, Sliman; McLoughlin, Michael; Tenore, Francesco

    2017-05-01

    Brain-computer interface (BCI) research has progressed rapidly, with BCIs shifting from animal tests to human demonstrations of controlling computer cursors and even advanced prosthetic limbs, the latter having been the goal of the Revolutionizing Prosthetics (RP) program. These achievements now include direct electrical intracortical microstimulation (ICMS) of the brain to provide human BCI users feedback information from the sensors of prosthetic limbs. These successes raise the question of how well people would be able to use BCIs to interact with systems that are not based directly on the body (e.g., prosthetic arms), and how well BCI users could interpret ICMS information from such devices. If paralyzed individuals could use BCIs to effectively interact with such non-anthropomorphic systems, it would offer them numerous new opportunities to control novel assistive devices. Here we explore how well a participant with tetraplegia can detect infrared (IR) sources in the environment using a prosthetic arm mounted camera that encodes IR information via ICMS. We also investigate how well a BCI user could transition from controlling a BCI based on prosthetic arm movements to controlling a flight simulator, a system with different physical dynamics than the arm. In that test, the BCI participant used environmental information encoded via ICMS to identify which of several upcoming flight routes was the best option. For both tasks, the BCI user was able to quickly learn how to interpret the ICMS-provided information to achieve the task goals.

  13. Navigating the fifth dimension: new concepts in interactive multimodality and multidimensional image navigation

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Dahlbom, Magnus; Czernin, Johannes

    2005-04-01

    Display and interpretation of multidimensional data obtained from the combination of 3D data acquired with different modalities (such as PET-CT) require complex software tools that allow the user to navigate the data and modify the different image parameters. With faster scanners it is now possible to acquire dynamic images of a beating heart or of the transit of a contrast agent, adding a fifth dimension to the data. We developed DICOM-compliant software for real-time navigation in very large sets of five-dimensional data, based on an intuitive multidimensional jog-wheel widely used in the video-editing industry. The software, provided under open source licensing, allows interactive, single-handed navigation through 3D images while adjusting the blending of image modalities, image contrast and intensity, and the rate of cine display of dynamic images. In this study we focused our effort on the user interface and the means for interactively navigating these large data sets while easily and rapidly changing multiple parameters such as image position, contrast, intensity, blending of colors, and magnification. Conventional mouse-driven user interfaces, which require the user to manipulate cursors and sliders on the screen, are too cumbersome and slow. We evaluated several hardware devices and identified a category of multipurpose jog-wheel devices used in the video-editing industry that is particularly suitable for rapidly navigating in five dimensions while adjusting several display parameters interactively. The application of this tool is demonstrated in cardiac PET-CT imaging and functional cardiac MRI studies.

  14. Designing User Interfaces for Smart-Applications for Operating Rooms and Intensive Care Units

    NASA Astrophysics Data System (ADS)

    Kindsmüller, Martin Christof; Haar, Maral; Schulz, Hannes; Herczeg, Michael

    Today’s physicians and nurses working in operating rooms and intensive care units have to deal with an ever-increasing amount of data. More and more medical devices deliver information that has to be perceived and interpreted with regard to patient status and the need to adjust therapy. The combination of high information load and insufficient usability creates a severe challenge for health personnel with respect to properly monitoring these devices, acknowledging alarms, and reacting to critical incidents in a timely manner. Smart Applications are a new kind of decision support system that incorporates medical expertise in order to help health personnel with diagnosis and therapy. By means of a User Centered Design process for two Smart Applications (an anaesthesia monitor display and a diagnosis display), we illustrate which approach should be followed and which processes and methods have been successfully applied in fostering the design of usable medical devices.

  15. Development of a data acquisition system using a RISC/UNIX workstation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Y.; Tanimori, T.; Yasu, Y.

    1993-05-01

    We have developed a compact data acquisition system on RISC/UNIX workstations. A SUN SPARCstation IPC was used, in which an extension bus "SBus" was linked to a VMEbus. The transfer rate achieved was better than 7 Mbyte/s between the VMEbus and the SUN. A device driver for CAMAC was developed in order to realize interrupt handling in UNIX. In addition, list processing has been incorporated in order to keep the high priority of the data-handling process in UNIX. The successful development of both the device driver and the list processing has made it possible to realize good real-time behavior on the RISC/UNIX system. Based on this architecture, a portable and versatile data-taking system has been developed, which consists of a graphical user interface, I/O handler, user analysis process, process manager and a CAMAC device driver.

  16. Speckle-based portable device for in-situ metrology of x-ray mirrors at Diamond Light Source

    NASA Astrophysics Data System (ADS)

    Wang, Hongchang; Kashyap, Yogesh; Zhou, Tunhe; Sawhney, Kawal

    2017-09-01

    For modern synchrotron light sources, the push toward diffraction-limited and coherence-preserved beams demands accurate metrology on X-ray optics. Moreover, it is important to perform in-situ characterization and optimization of X-ray mirrors since their ultimate performance is critically dependent on the working conditions. Therefore, it is highly desirable to develop a portable metrology device, which can be easily implemented on a range of beamlines for in-situ metrology. An X-ray speckle-based portable device for in-situ metrology of synchrotron X-ray mirrors has been developed at Diamond Light Source. Ultra-high angular sensitivity is achieved by scanning the speckle generator in the X-ray beam. In addition to the compact setup and ease of implementation, a user-friendly graphical user interface has been developed to ensure that characterization and alignment of X-ray mirrors is simple and fast. The functionality and feasibility of this device is presented with representative examples.

  17. Development of a User Interface for a Regression Analysis Software Tool

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    An easy-to-use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface's design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen, which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and the graphical user interface are discussed in order to illustrate the user interface's overall design approach.

  18. Comparing Text-based and Graphic User Interfaces for Novice and Expert Users

    PubMed Central

    Chen, Jung-Wei; Zhang, Jiajie

    2007-01-01

    Graphic User Interface (GUI) is commonly considered to be superior to Text-based User Interface (TUI). This study compares GUI and TUI in an electronic dental record system. Several usability analysis techniques compared the relative effectiveness of a GUI and a TUI. Expert users and novice users were evaluated in the time required and the steps needed to complete the task. A within-subject design was used to evaluate whether experience with either interface would affect task performance. The results show that the GUI was not better than the TUI for expert users. The GUI was better for novice users, and for novice users there was a learning-transfer effect from TUI to GUI. This means that whether a user interface is user-friendly depends on the mapping between the user interface and the tasks. GUI by itself may or may not be better than TUI. PMID:18693811

  19. Comparing Text-based and Graphic User Interfaces for novice and expert users.

    PubMed

    Chen, Jung-Wei; Zhang, Jiajie

    2007-10-11

    Graphic User Interface (GUI) is commonly considered to be superior to Text-based User Interface (TUI). This study compares GUI and TUI in an electronic dental record system. Several usability analysis techniques compared the relative effectiveness of a GUI and a TUI. Expert users and novice users were evaluated in the time required and the steps needed to complete the task. A within-subject design was used to evaluate whether experience with either interface would affect task performance. The results show that the GUI was not better than the TUI for expert users. The GUI was better for novice users, and for novice users there was a learning-transfer effect from TUI to GUI. This means that whether a user interface is user-friendly depends on the mapping between the user interface and the tasks. GUI by itself may or may not be better than TUI.

  20. Closed-loop dialog model of face-to-face communication with a photo-real virtual human

    NASA Astrophysics Data System (ADS)

    Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás

    2004-01-01

    We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services, and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end users. We model the interaction process between the user and the system as part of a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.

  1. Bringing Control System User Interfaces to the Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xihui; Kasemir, Kay

    With the evolution of web-based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.

  2. Nurses' perceptions and problems in the usability of a medication safety app.

    PubMed

    Ankem, Kalyani; Cho, Sookyung; Simpson, Diana

    2017-10-16

    The majority of medication apps support medication adherence. Equally, if not more, important is medication safety. Few apps address medication safety, and fewer studies have been conducted with these apps. The usability of a medication safety app was tested with nurses to reveal their perceptions of the graphical user interface and to discover problems they encountered in using the app. Usability testing of the app was conducted with RN-BSN students and informatics students (n = 18). Perceptions of the graphical components were gathered in pretest and posttest questionnaires, and video recordings of the usability testing were transcribed. The significance of the difference in mean performance time for 8 tasks was tested, and qualitative analysis was used to identify the problems encountered and to rate the severity of each problem. While all participants perceived the graphical user interface as easy to understand, nurses took significantly more time to complete certain tasks. More nurses found the medication app lacking in the intuitiveness of its user interface design, in its capability to match real-world data, and in its information architecture. To successfully integrate mobile devices in healthcare, developers must address the problems that nurses encountered in using the app.

  3. Nuclear data made easily accessible through the Notre Dame Nuclear Database

    NASA Astrophysics Data System (ADS)

    Khouw, Timothy; Lee, Kevin; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani

    2014-09-01

    In 1994, the NNDC revolutionized nuclear research by providing a colorful, clickable, searchable database over the internet. Over the last twenty years, web technology has evolved dramatically. Our project, the Notre Dame Nuclear Database, aims to provide a more comprehensive and broadly searchable interactive body of data. The database can be searched by an array of filters, including metadata such as the facility where a measurement was made, the author(s), or the date of publication for the datum of interest. The user interface takes full advantage of HTML, a web markup language; CSS (cascading style sheets), which defines the aesthetics of the website; and JavaScript, a language that can process complex data. A command-line interface is supported that interacts with the database directly on a user's local machine, providing single-command access to data. This is possible through the use of a standardized API (application programming interface) that relies upon well-defined filtering variables to produce customized search results. We offer an innovative chart of nuclides utilizing scalable vector graphics (SVG) to deliver users an unsurpassed level of interactivity supported on all computers and mobile devices. We will present a functional demo of our database at the conference.
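The metadata filtering the abstract describes can be sketched as a generic query over record dictionaries. The field names and the `query` helper below are assumptions for illustration, not the database's actual API schema.

```python
# Illustrative metadata query in the spirit of the described API.
# Record fields (facility, author, year) are invented examples.

RECORDS = [
    {"nuclide": "26Al", "facility": "NSL", "author": "Smith", "year": 2012},
    {"nuclide": "60Fe", "facility": "ANL", "author": "Lee", "year": 2013},
    {"nuclide": "26Al", "facility": "ANL", "author": "Lee", "year": 2014},
]

def query(records, **filters):
    """Return records whose fields match every supplied filter."""
    return [r for r in records if all(r.get(k) == v for k, v in filters.items())]

print([r["nuclide"] for r in query(RECORDS, facility="ANL", author="Lee")])
```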

  4. Decoding static and dynamic arm and hand gestures from the JPL BioSleeve

    NASA Astrophysics Data System (ADS)

    Wolf, M. T.; Assad, C.; Stoica, A.; You, Kisung; Jethani, H.; Vernacchia, M. T.; Fromm, J.; Iwashita, Y.

    This paper presents methods for inferring arm and hand gestures from forearm surface electromyography (EMG) sensors and an inertial measurement unit (IMU). These sensors, together with their electronics, are packaged in an easily donned device, termed the BioSleeve, worn on the forearm. The gestures decoded from BioSleeve signals can provide natural user interface commands to computers and robots, without encumbering the user's hands and without the problems that hinder camera-based systems. Potential aerospace applications for this technology include gesture-based crew-autonomy interfaces, high-degree-of-freedom robot teleoperation, and astronauts' control of power-assisted gloves during extra-vehicular activity (EVA). We have developed techniques to interpret both static (stationary) and dynamic (time-varying) gestures from the BioSleeve signals, enabling a diverse and adaptable command library. For static gestures, we achieved over 96% accuracy on 17 gestures and nearly 100% accuracy on 11 gestures, based solely on EMG signals. Nine dynamic gestures were decoded with an accuracy of 99%. This combination of wearable EMG and IMU hardware and accurate algorithms for decoding both static and dynamic gestures thus shows promise for natural user interface applications.
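Static-gesture decoding of this kind is often approached as classification of per-channel EMG feature vectors. A minimal nearest-centroid sketch, which is a generic stand-in and not the BioSleeve's actual algorithm (gesture names and feature values are invented):

```python
# Nearest-centroid classification of static gestures from EMG features
# (e.g., per-channel mean absolute value). Illustrative only.
import math

def train(examples):
    """examples: {gesture: [feature_vector, ...]} -> per-gesture centroids."""
    centroids = {}
    for name, vecs in examples.items():
        dim = len(vecs[0])
        centroids[name] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return centroids

def classify(centroids, vec):
    """Assign vec to the gesture with the nearest centroid."""
    return min(centroids, key=lambda g: math.dist(centroids[g], vec))

centroids = train({
    "fist": [[0.9, 0.8], [1.0, 0.7]],
    "open": [[0.1, 0.2], [0.2, 0.1]],
})
print(classify(centroids, [0.85, 0.75]))  # fist
```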

  5. VIEW-Station software and its graphical user interface

    NASA Astrophysics Data System (ADS)

    Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki

    1992-04-01

    VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations, and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be simply expressed, using a functional notation. VIEW-Station provides a hardware-independent window system extension called VIEW-Windows. In terms of GUI (Graphical User Interface), VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution. Three types of interpreters, called μV-Sugar, VS-Shell and VPL, are provided. Users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities for creating GUIs for new applications on the VIEW-Station system. A set of widgets is available for construction of task-oriented GUIs. A GUI builder called VIEW-Kid was developed for WYSIWYG interactive interface design.

  6. Preliminary results of BRAVO project: brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks.

    PubMed

    Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo

    2011-01-01

    This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots that enable motor-impaired users to perform complex visuomotor tasks requiring a sequence of reaches, grasps and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and the recovery/restoration of motor function in patients with upper limb sensorimotor impairment through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together to provide force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through active prediction of intention/action. The system will integrate information about the movement carried out by the user with a prediction of the intended action, derived from the user's current gaze (measured through eye tracking), brain activation (measured through BCI) and force sensor measurements. © 2011 IEEE

  7. Wireless photoplethysmographic device for heart rate variability signal acquisition and analysis.

    PubMed

    Reyes, Ivan; Nazeran, Homer; Franco, Mario; Haltiwanger, Emily

    2012-01-01

    The photoplethysmographic (PPG) signal has the potential to aid in the acquisition and analysis of the heart rate variability (HRV) signal: a non-invasive quantitative marker of the autonomic nervous system that can be used to assess cardiac health and other physiologic conditions. A low-power wireless PPG device was custom-developed to monitor, acquire and analyze the arterial pulse in the finger. The system consisted of an optical sensor to detect the arterial pulse as variations in reflected light intensity, signal conditioning circuitry to process the reflected light signal, a microcontroller to control PPG signal acquisition, digitization and wireless transmission, and a receiver to collect the transmitted digital data and convert them back to their analog representations. A personal computer was used to further process the captured PPG signals and display them. A MATLAB program was then developed to capture the PPG data, detect the RR peaks, perform spectral analysis of the PPG data, and extract the HRV signal. A user-friendly graphical user interface (GUI) was developed in LabView to display the PPG data and their spectra. The performance of each module (sensing unit, signal conditioning, wireless transmission/reception units, and graphical user interface) was assessed individually, and the device was then tested as a whole. PPG data were then obtained from five healthy individuals to test the utility of the wireless system. The device was able to reliably acquire the PPG signals from the volunteers. To validate the accuracy of the MATLAB code, RR peak information from each subject was fed into the Kubios software as a text file. Kubios generated a report sheet with the time-domain and frequency-domain parameters of the acquired data. These features were then compared against those calculated by MATLAB. The preliminary results demonstrate that the prototype wireless device can be used to perform HRV signal acquisition and analysis.
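The time-domain HRV parameters that were cross-checked against Kubios can be computed directly from a list of RR intervals. A minimal sketch with invented function names, assuming the common SDNN and RMSSD definitions with population (divide-by-n) formulas:

```python
# Time-domain HRV metrics from RR intervals in milliseconds.
# Illustrative sketch; not the paper's MATLAB code.
import math

def sdnn(rr):
    """Standard deviation of RR intervals (population formula)."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR differences."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]
print(round(sdnn(rr), 2), round(rmssd(rr), 2))
```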

  8. A Flexible Microcontroller-Based Data Acquisition Device

    PubMed Central

    Hercog, Darko; Gergič, Bojan

    2014-01-01

    This paper presents a low-cost microcontroller-based data acquisition device. The key component of the presented solution is a configurable microcontroller-based device with an integrated USB transceiver and a 12-bit analogue-to-digital converter (ADC). The presented embedded DAQ device contains a preloaded program (firmware) that enables easy acquisition and generation of analogue and digital signals, and data transfer between the device and the application running on a PC via the USB bus. The device has been developed as a USB human interface device (HID). This USB class is natively supported by most operating systems, so no additional USB drivers need to be installed. The input/output peripheral of the presented device is not static but rather flexible, and can easily be configured to customised needs without changing the firmware. Using the developed configuration utility, a majority of the chip pins can be configured as analogue inputs, digital inputs/outputs, PWM outputs or one of the SPI lines. In addition, LabVIEW drivers have been developed for this device. Using these drivers, data acquisition and signal processing algorithms, as well as a graphical user interface (GUI), can easily be developed in the well-known, industry-proven, block-oriented LabVIEW programming environment. PMID:24892494
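Host-side processing for such a device typically starts by scaling raw 12-bit ADC counts to volts. A minimal sketch, assuming a 3.3 V reference (the abstract does not state the device's reference voltage):

```python
# Generic 12-bit ADC count-to-voltage conversion; reference voltage
# and resolution are parameters, with illustrative defaults.

def adc_to_volts(count, vref=3.3, bits=12):
    """Scale a raw ADC count to volts over the full-scale range."""
    return count * vref / ((1 << bits) - 1)

print(adc_to_volts(4095))              # ~3.3 V at full scale
print(round(adc_to_volts(2048), 4))    # mid-scale reading
```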

  9. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    NASA Astrophysics Data System (ADS)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multidimensional data is a well-explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration, which would in turn enable certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A heuristic evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations, although it is limited by users needing to remember the required gestures. In comparison, touch-based gesture navigation is typically more familiar to users, as these gestures were engineered from standard multi-touch actions. Future work will include a complete usability test to evaluate and compare the different interaction modalities across the different visualization tasks.

  10. Network and user interface for PAT DOME virtual motion environment system

    NASA Technical Reports Server (NTRS)

    Worthington, J. W.; Duncan, K. M.; Crosier, W. G.

    1993-01-01

    The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) provides astronauts a virtual microgravity sensory environment designed to help alleviate the symptoms of space motion sickness (SMS). The system consists of four microcomputers networked to provide real-time control, and an image generator (IG) driving a wide-angle video display inside a dome structure. The spherical display demands distortion correction. The system is currently being modified with a new graphical user interface (GUI) and a new Silicon Graphics IG. This paper concentrates on the new GUI and the networking scheme. The new GUI eliminates proprietary graphics hardware and software, and instead makes use of standard and low-cost PC video (CGA) and off-the-shelf software (Microsoft's Quick C). Mouse selection for user input is supported. The new Silicon Graphics IG requires an Ethernet interface. The microcomputer known as the Real Time Controller (RTC), which has overall control of the system and is written in Ada, was modified to use the free public-domain NCSA Telnet software for Ethernet communications with the Silicon Graphics IG. The RTC also maintains the original ARCNET communications through Novell Netware IPX with the rest of the system. The Telnet TCP/IP protocol was first used for real-time communication, but because of buffering problems the Telnet datagram (UDP) protocol had to be implemented instead. Since the Telnet modules are written in C, the Ada pragma 'Interface' was used to interface with the network calls.

  11. Haptic interfaces: Hardware, software and human performance

    NASA Technical Reports Server (NTRS)

    Srinivasan, Mandayam A.

    1995-01-01

    Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.

  12. NREL’s Controllable Grid Interface Saves Time and Resources, Improves Reliability of Renewable Energy Technologies; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The National Renewable Energy Laboratory's (NREL) controllable grid interface (CGI) test system at the National Wind Technology Center (NWTC) is one of two user facilities at NREL capable of testing and analyzing the integration of megawatt-scale renewable energy systems. The CGI specializes in testing of multimegawatt-scale wind and photovoltaic (PV) technologies as well as energy storage devices, transformers, control and protection equipment at medium-voltage levels, allowing the determination of the grid impacts of the tested technology.

  13. Neural networks for simultaneous classification and parameter estimation in musical instrument control

    NASA Astrophysics Data System (ADS)

    Lee, Michael; Freed, Adrian; Wessel, David

    1992-08-01

    In this report we present our tools for prototyping adaptive user interfaces in the context of real-time musical instrument control. Characteristic of most human communication is the simultaneous use of classified events and estimated parameters. We have integrated a neural network object into the MAX language to explore adaptive user interfaces that consider these facets of human communication. By placing the neural processing in the context of a flexible real-time musical programming environment, we can rapidly prototype experiments on applications of adaptive interfaces and learning systems to musical problems. We have trained networks to recognize gestures from a Mathews radio baton, a Nintendo Power Glove™, and MIDI keyboard gestural input devices. In one experiment, a network successfully extracted classification and attribute data from gestural contours transduced by a continuous space controller, suggesting their application in the interpretation of conducting gestures and musical instrument control. We discuss network architectures, the low-level features extracted for the networks to operate on, training methods, and musical applications of adaptive techniques.

  14. Wireless sEMG-Based Body-Machine Interface for Assistive Technology Devices.

    PubMed

    Fall, Cheikh Latyr; Gagnon-Turcotte, Gabriel; Dube, Jean-Francois; Gagne, Jean Simon; Delisle, Yanick; Campeau-Lecours, Alexandre; Gosselin, Clement; Gosselin, Benoit

    2017-07-01

    Assistive technology (AT) tools and appliances are being ever more widely used and developed worldwide to improve the autonomy of people living with disabilities and to ease the interaction with their environment. This paper describes an intuitive and wireless surface electromyography (sEMG) based body-machine interface for AT tools. Spinal cord injuries at the C5-C8 levels affect patients' control of their arms, forearms, hands, and fingers. Thus, using classical AT control interfaces (keypads, joysticks, etc.) is often difficult or impossible. The proposed system reads the AT users' residual functional capacities through their sEMG activity and converts them into appropriate commands using a threshold-based control algorithm. It has proven to be suitable as a control alternative for assistive devices and has been tested with the JACO arm, an articulated assistive device whose purpose is to help people living with upper-body disabilities in their daily life activities. The wireless prototype, the architecture of which is based on a 3-channel sEMG measurement system and a 915-MHz wireless transceiver built around a low-power microcontroller, uses low-cost off-the-shelf commercial components. The embedded controller is compared with JACO's regular joystick-based interface, using combinations of forearm, pectoral, masseter, and trapezius muscles. The measured index-of-performance values are 0.88, 0.51, and 0.41 bits/s, respectively, for correlation coefficients with Fitts' model of 0.75, 0.85, and 0.67. These results demonstrate that the proposed controller offers an attractive alternative to conventional interfaces, such as joystick devices, for upper-body disabled people using ATs such as JACO.
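A threshold-based control algorithm of the kind the abstract mentions can be sketched as a per-channel comparison of sEMG envelopes against fixed thresholds. The channel names, threshold values, and command strings below are invented for illustration, not the paper's actual mapping:

```python
# Illustrative threshold-based sEMG decoder: each channel issues one
# command when its envelope crosses a per-channel threshold.

THRESHOLDS = {"forearm": 0.30, "pectoral": 0.25, "masseter": 0.40}
COMMANDS = {"forearm": "MOVE_X", "pectoral": "MOVE_Y", "masseter": "GRIP"}

def decode(envelopes):
    """Return the commands of all channels whose envelope is at or above threshold."""
    return [COMMANDS[ch] for ch, v in envelopes.items() if v >= THRESHOLDS[ch]]

print(decode({"forearm": 0.5, "pectoral": 0.1, "masseter": 0.45}))
```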

  15. Adaptive Phase Delay Generator

    NASA Technical Reports Server (NTRS)

    Greer, Lawrence

    2013-01-01

    There are several experimental setups involving rotating machinery that require some form of synchronization. The adaptive phase delay generator (APDG), the Bencic-1000, is a flexible instrument that allows the user to generate pulses synchronized to the rising edge of a tachometer signal from any piece of rotating machinery. These synchronized pulses can vary in delay angle, pulse width, number of pulses per period, number of skipped pulses, and total number of pulses. Owing to the design of the pulse generator, any and all of these parameters can be changed independently, yielding an unparalleled level of versatility. There are two user interfaces to the APDG. The first is a LabVIEW program that has the advantage of displaying all of the pulse parameters and input signal data within one neatly organized window on the PC monitor. Furthermore, the LabVIEW interface plots the rpm of the two input signal channels in real time. The second user interface is a handheld portable device for use anywhere a computer is not accessible. It consists of a liquid-crystal display and keypad, which enable the user to control the unit by scrolling through a host of command menus and parameter listings. The APDG combines all of the desired synchronization control into one unit. The experimenter can adjust the delay, pulse width, pulse count, and number of skipped pulses, and produce a specified number of pulses per revolution. Each of these parameters can be changed independently, providing an unparalleled level of versatility when synchronizing hardware to a host of rotating machinery. The APDG allows experimenters to set up quickly and generate a host of synchronizing configurations using a simple user interface, which should lead to faster results.
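At a given shaft speed, the APDG's pulse parameters reduce to simple timing arithmetic: a delay angle maps to a delay after the tachometer edge, and pulses-per-revolution divides the rotation period. A sketch of that arithmetic (function names are invented; this is not the instrument's firmware):

```python
# Converting delay-angle and pulses-per-revolution parameters to times.

def delay_seconds(rpm, delay_deg):
    """Delay after the tachometer rising edge for a given delay angle."""
    period = 60.0 / rpm           # one revolution, in seconds
    return period * delay_deg / 360.0

def pulse_times(rpm, delay_deg, pulses_per_rev):
    """Start times of evenly spaced pulses within one revolution."""
    period = 60.0 / rpm
    start = delay_seconds(rpm, delay_deg)
    return [start + i * period / pulses_per_rev for i in range(pulses_per_rev)]

print(delay_seconds(3000, 90))    # ~0.005 s: a quarter turn at 3000 rpm
print(pulse_times(3000, 0, 4))    # 4 pulses spread over a 0.02 s revolution
```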

  16. Research in image management and access

    NASA Technical Reports Server (NTRS)

    Vondran, Raymond F.; Barron, Billy J.

    1993-01-01

    Presently, the problem of overall library system design has been compounded by the accretion of both function and structure to a basic framework of requirements. While more device power has led to increased functionality, opportunities for reducing system complexity at the user interface level have not always been pursued with equal zeal. The purpose of this book is therefore to set forth and examine these opportunities within the general framework of human factors research in man-machine interfaces. Human factors may be viewed as a series of trade-off decisions among four polarized objectives: machine resources versus user specifications, and functionality versus user requirements. In the past, a limiting factor was the availability of systems. However, in the last two years, over one hundred libraries supported by many different software configurations have been added to the Internet. This document includes a statistical analysis of human responses to five Internet library systems by key features, the development of an ideal online catalog system, and ideal online catalog systems for libraries and information centers.

  17. Interacting with a security system: The Argus user interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrin, E.; Davis, G.E.

    1993-12-31

    In the mid-1980s the Lawrence Livermore National Laboratory (LLNL) developed the Argus Security System. Key requirements were to eliminate the telephone as a verification device for opening and closing alarm stations and to allow need-to-know access through local enrollment at alarm stations. Resulting from these requirements was an LLNL-designed user interface called the Remote Access Panel (RAP). The Argus RAP interacts with Argus field processors to allow secure station mode changes and local station enrollment, provides user direction and response, and assists station maintenance personnel. It consists of a tamper-detecting housing containing a badge reader, a keypad with sight screen, special-purpose push buttons and a liquid-crystal display. This paper discusses Argus system concepts, RAP design, functional characteristics and its physical configurations. The paper also describes the RAP's use in access-control booths, its integration with biometrics and its operation for multi-person-rule stations and compartmented facilities.

  18. Intuitive wireless control of a robotic arm for people living with an upper body disability.

    PubMed

    Fall, C L; Turgeon, P; Campeau-Lecours, A; Maheu, V; Boukadoum, M; Roy, S; Massicotte, D; Gosselin, C; Gosselin, B

    2015-08-01

    Assistive Technologies (ATs), also called extrinsic enablers, are useful tools for people living with various disabilities. The key points when designing such devices concern not only their intended goal but also the most suitable human-machine interface (HMI) to provide to users. This paper describes the design of a highly intuitive wireless controller for people living with upper-body disabilities who have residual or complete control of their neck and shoulders. Tested with JACO, a six-degree-of-freedom (6-DOF) assistive robotic arm with 3 flexible fingers on its end-effector, the system described in this article is made of low-cost commercial off-the-shelf components and allows a full emulation of JACO's standard controller, a 3-axis joystick with 7 user buttons. To do so, three nine-degree-of-freedom (9-DOF) inertial measurement units (IMUs) are connected to a microcontroller and help measure the user's head and shoulder positions, using a complementary filter approach. The results are then transmitted to a base station via a 2.4-GHz low-power wireless transceiver and interpreted by the control algorithm running on a PC host. A dedicated software interface allows the user to quickly calibrate the controller and translates the information into suitable commands for JACO. The proposed controller is thoroughly described, from the electronic design to the implemented algorithms and user interfaces. Its performance and future improvements are discussed as well.
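The complementary-filter approach mentioned for the IMUs can be sketched in a few lines: the gyro is trusted over short intervals while the accelerometer tilt anchors the long-term estimate. The blending coefficient and signal values below are illustrative, not the paper's parameters:

```python
# One-axis complementary filter update: blend the gyro-integrated angle
# with the accelerometer-derived tilt. Illustrative sketch only.

def complementary(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step; angles in degrees, gyro_rate in deg/s."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
for _ in range(100):   # stationary sensor: zero gyro rate, 10-degree tilt
    angle = complementary(angle, 0.0, 10.0, dt=0.01)
print(round(angle, 2))  # converges toward the 10-degree accelerometer tilt
```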

  19. Flexible Peripheral Component Interconnect Input/Output Card

    NASA Technical Reports Server (NTRS)

    Bigelow, Kirk K.; Jerry, Albert L.; Baricio, Alisha G.; Cummings, Jon K.

    2010-01-01

    The Flexible Peripheral Component Interconnect (PCI) Input/Output (I/O) Card is an innovative circuit board that provides functionality to interface between a variety of devices. It supports user-defined interrupts for interface synchronization, tracks system faults and failures, and includes checksum and parity evaluation of interface data. The card supports up to 16 channels of high-speed, half-duplex, low-voltage digital signaling (LVDS) serial data, and can interface combinations of serial and parallel devices. Placement of a processor within the field programmable gate array (FPGA) controls an embedded application with links to host memory over its PCI bus. The FPGA also provides protocol stacking and quick digital signal processor (DSP) functions to improve host performance. Hardware timers, counters, state machines, and other glue logic support interface communications. The Flexible PCI I/O Card provides an interface for a variety of dissimilar computer systems, featuring direct memory access functionality. The card has the following attributes: 8/16/32-bit, 33-MHz PCI r2.2 compliance, Configurable for universal 3.3V/5V interface slots, PCI interface based on PLX Technology's PCI9056 ASIC, General-use 512K × 16 SDRAM memory, General-use 1M × 16 Flash memory, FPGA with 3K to 56K logical cells with embedded 27K to 198K bits RAM, I/O interface: 32-channel LVDS differential transceivers configured in eight 4-bit banks; signaling rates to 200 MHz per channel, Common SCSI-3, 68-pin interface connector.
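
    The checksum and parity evaluation of interface data that the card performs can be illustrated generically. These routines are an illustrative sketch of the concepts, not the card's firmware or FPGA logic.

```python
def checksum8(data: bytes) -> int:
    """8-bit additive checksum: sum of all bytes modulo 256."""
    return sum(data) & 0xFF

def even_parity(word: int) -> int:
    """Parity bit that makes the total count of 1-bits even."""
    return bin(word).count("1") & 1

# A hypothetical 3-byte frame: the receiver recomputes both values
# and compares them against the transmitted check fields.
frame = bytes([0x12, 0x34, 0x56])
cs = checksum8(frame)
```

    In hardware, the same parity computation is typically a tree of XOR gates evaluated per word as data is clocked through.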

  20. An EMG Interface for the Control of Motion and Compliance of a Supernumerary Robotic Finger

    PubMed Central

    Hussain, Irfan; Spagnoletti, Giovanni; Salvietti, Gionata; Prattichizzo, Domenico

    2016-01-01

    In this paper, we propose a novel electromyographic (EMG) control interface to control the motion and joint compliance of a supernumerary robotic finger. Supernumerary robotic fingers are a recently introduced class of wearable robotics that provides users with additional robotic limbs in order to compensate for or augment the abilities of the natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and perform actions in synergy with the human limbs, the extra finger's control should behave like that of the human fingers, including the ability to regulate compliance. It is therefore important to design a control interface, and to select actuators and sensing capabilities for the robotic extra finger, that are compatible with stiffness-regulation control techniques. We propose an EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition, associated with the motion control of the robotic device, and a single-channel surface EMG electrode interface to regulate the compliance of the robotic device. We also present an updated version of the robotic extra finger in which the adduction/abduction motion is realized through a ball-bearing and spur-gear mechanism. We have validated the proposed interface with two sets of experiments, related to compensation and augmentation. In the first set, different bimanual tasks were performed with the help of the robotic device while simulating a paretic hand, since this novel wearable system can be used to compensate for the missing grasping abilities of chronic stroke patients. In the second set, the robotic extra finger was used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface was used. The obtained results demonstrate that the proposed control interface is intuitive and can successfully be used not only to control the motion of a supernumerary robotic finger but also to regulate its compliance. The proposed approach can also be exploited for the control of different wearable devices that have to actively cooperate with the human limbs. PMID:27891088

  1. A web-based biosignal data management system for U-health data integration.

    PubMed

    Ro, Dongwoo; Yoo, Sooyoung; Choi, Jinwook

    2008-11-06

    In the ubiquitous healthcare environment, the biosignal data should be easily accessed and properly maintained. This paper describes a web-based data management system. It consists of a device interface, a data upload control, a central repository, and a web server. For the user-specific web services, a MFER Upload ActiveX Control was developed.

  2. Need low-cost networking? Consider DeviceNet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, W.H.

    1996-11-01

    The drive to reduce production costs and optimize system performance in manufacturing facilities causes many end users to invest in network solutions. Because of distinct differences between the way tasks are performed and the way data are handled for various applications, it is clear that more than one network will be needed in most facilities. What is not clear is which network is most appropriate for a given application. The information layer is the link between automation and information environments via management information systems (MISs) and manufacturing execution systems (MESs). Here the market has chosen a de facto standard in Ethernet, primarily transmission control protocol/internet protocol (TCP/IP) and secondarily manufacturing messaging system (MMS). There is no single standard at the device layer. However, the DeviceNet communication standard has made strides toward this goal. This protocol eliminates expensive hardwiring and provides improved communication between devices, as well as important device-level diagnostics not easily accessible or available through hardwired I/O interfaces. DeviceNet is a low-cost communications link connecting industrial devices to a network. Many original equipment manufacturers and end users have chosen the DeviceNet platform for several reasons, but most frequently because of four key features: interchangeability; low cost; advanced diagnostics; and the ability to insert devices under power.

  3. Using Patient Feedback to Optimize the Design of a Certolizumab Pegol Electromechanical Self-Injection Device: Insights from Human Factors Studies.

    PubMed

    Domańska, Barbara; Stumpp, Oliver; Poon, Steven; Oray, Serkan; Mountian, Irina; Pichon, Clovis

    2018-01-01

    We incorporated patient feedback from human factors studies (HFS) in the patient-centric design and validation of ava®, an electromechanical device (e-Device) for self-injecting the anti-tumor necrosis factor certolizumab pegol (CZP). Healthcare professionals, caregivers, healthy volunteers, and patients with rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, or Crohn's disease participated in 11 formative HFS to optimize the e-Device design through intended user feedback; nine studies involved simulated injections. Formative participant questionnaire feedback was collected following e-Device prototype handling. Validation HFS (one EU study and one US study) assessed the safe and effective setup and use of the e-Device using 22 predefined critical tasks. Task outcomes were categorized as "failures" if participants did not succeed within three attempts. Two hundred eighty-three participants entered formative (163) and validation (120) HFS; 260 participants performed one or more simulated e-Device self-injections. Design changes following formative HFS included alterations to buttons and the graphical user interface screen. All validation HFS participants completed critical tasks necessary for CZP dose delivery, with minimal critical task failures (12 of 572 critical tasks, 2.1%, in the EU study, and 2 of 5310 critical tasks, less than 0.1%, in the US study). CZP e-Device development was guided by intended user feedback through HFS, ensuring the final design addressed patients' needs. In both validation studies, participants successfully performed all critical tasks, demonstrating safe and effective e-Device self-injections. UCB Pharma. Plain language summary available on the journal website.

  4. Design of a pulse oximeter for price sensitive emerging markets.

    PubMed

    Jones, Z; Woods, E; Nielson, D; Mahadevan, S V

    2010-01-01

    While the global market for medical devices is located primarily in developed countries, price sensitive emerging markets comprise an attractive, underserved segment in which products need a unique set of value propositions to be competitive. A pulse oximeter was designed expressly for emerging markets, and a novel feature set was implemented to reduce the cost of ownership and improve the usability of the device. Innovations included the ability of the device to generate its own electricity, a built-in sensor that cuts down on operating costs, and a graphical, symbolic user interface. These features yield an average reduction of over 75% in the device cost of ownership versus comparable pulse oximeters already on the market.

  5. User interface for a tele-operated robotic hand system

    DOEpatents

    Crawford, Anthony L

    2015-03-24

    Disclosed here is a user interface for a robotic hand. The user interface anchors a user's palm in a relatively stationary position and determines various angles of interest necessary for a user's finger to achieve a specific fingertip location. The user interface additionally conducts a calibration procedure to determine the user's applicable physiological dimensions. The user interface uses the applicable physiological dimensions and the specific fingertip location, and treats the user's finger as a two link three degree-of-freedom serial linkage in order to determine the angles of interest. The user interface communicates the angles of interest to a gripping-type end effector which closely mimics the range of motion and proportions of a human hand. The user interface requires minimal contact with the operator and provides distinct advantages in terms of available dexterity, work space flexibility, and adaptability to different users.
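
    The patent treats the finger as a two-link serial linkage whose joint angles are determined from a fingertip target. A minimal planar two-link inverse-kinematics sketch (covering two of the three degrees of freedom, elbow-down branch) illustrates the idea; link lengths and the function name are illustrative, not taken from the patent.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Planar two-link inverse kinematics (elbow-down solution).

    Returns joint angles (theta1, theta2) placing the fingertip at
    (x, y) for proximal/distal link lengths l1 and l2.
    """
    # Law of cosines gives the elbow angle from the target distance.
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(d) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(d)
    # Shoulder angle: target bearing minus the elbow's offset angle.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

    The calibration step described in the record would supply the per-user link lengths l1 and l2 before solving.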

  6. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics through a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  7. Literature Review on Needs of Upper Limb Prosthesis Users.

    PubMed

    Cordella, Francesca; Ciancio, Anna Lisa; Sacchetti, Rinaldo; Davalli, Angelo; Cutti, Andrea Giovanni; Guglielmelli, Eugenio; Zollo, Loredana

    2016-01-01

    The loss of one hand can significantly affect the level of autonomy and the capability of performing daily living, working and social activities. Current prosthetic solutions do little to overcome these problems, owing to limitations in the interfaces adopted for controlling the prosthesis and to the lack of force or tactile feedback, which limit hand grasp capabilities. This paper presents a literature review of needs analyses of upper limb prosthesis users, and points out the main critical aspects of current prosthetic solutions in terms of user satisfaction and the activities of daily living users would like to perform with the prosthetic device. The ultimate goal is to provide design inputs in the prosthetic field and, at the same time, increase user satisfaction rates and reduce device abandonment. A list of requirements for upper limb prostheses is proposed, grounded in the performed analysis of user needs. It aims to (i) provide guidelines for improving the level of acceptability and usefulness of the prosthesis, by accounting for hand functional and technical aspects; (ii) propose a control architecture for PNS-based prosthetic systems able to satisfy the analyzed user wishes; and (iii) provide hints for improving the quality of the methods (e.g., questionnaires) adopted for understanding user satisfaction with prostheses.

  8. Literature Review on Needs of Upper Limb Prosthesis Users

    PubMed Central

    Cordella, Francesca; Ciancio, Anna Lisa; Sacchetti, Rinaldo; Davalli, Angelo; Cutti, Andrea Giovanni; Guglielmelli, Eugenio; Zollo, Loredana

    2016-01-01

    The loss of one hand can significantly affect the level of autonomy and the capability of performing daily living, working and social activities. Current prosthetic solutions do little to overcome these problems, owing to limitations in the interfaces adopted for controlling the prosthesis and to the lack of force or tactile feedback, which limit hand grasp capabilities. This paper presents a literature review of needs analyses of upper limb prosthesis users, and points out the main critical aspects of current prosthetic solutions in terms of user satisfaction and the activities of daily living users would like to perform with the prosthetic device. The ultimate goal is to provide design inputs in the prosthetic field and, at the same time, increase user satisfaction rates and reduce device abandonment. A list of requirements for upper limb prostheses is proposed, grounded in the performed analysis of user needs. It aims to (i) provide guidelines for improving the level of acceptability and usefulness of the prosthesis, by accounting for hand functional and technical aspects; (ii) propose a control architecture for PNS-based prosthetic systems able to satisfy the analyzed user wishes; and (iii) provide hints for improving the quality of the methods (e.g., questionnaires) adopted for understanding user satisfaction with prostheses. PMID:27242413

  9. Tactile Data Entry System

    NASA Technical Reports Server (NTRS)

    Adams, Richard J.

    2015-01-01

    The patent-pending Glove-Enabled Computer Operations (GECO) design leverages extravehicular activity (EVA) glove design features as platforms for instrumentation and tactile feedback, enabling the gloves to function as human-computer interface devices. Flexible sensors in each finger enable control inputs that can be mapped to any number of functions (e.g., a mouse click, a keyboard strike, or a button press). Tracking of hand motion is interpreted alternatively as movement of a mouse (change in cursor position on a graphical user interface) or a change in hand position on a virtual keyboard. Programmable vibro-tactile actuators aligned with each finger enrich the interface by creating the haptic sensations associated with control inputs, such as recoil of a button press.
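
    The mapping of per-finger flex readings to discrete control events that GECO describes can be sketched as follows. The threshold, ADC range, and event bindings are invented for illustration and are not part of the actual GECO design.

```python
# Hypothetical: flex readings are 10-bit ADC counts (0-1023); a finger
# flexed past the threshold fires whatever event is bound to it.
FLEX_THRESHOLD = 600

def glove_events(readings, bindings):
    """Emit the bound event for each finger flexed past the threshold.

    readings: list of ADC counts, one per finger.
    bindings: {finger_index: event_name}; unbound fingers are ignored.
    """
    return [bindings[i] for i, r in enumerate(readings)
            if r > FLEX_THRESHOLD and i in bindings]

bindings = {0: "mouse_click", 1: "key_enter"}
events = glove_events([700, 200, 650], bindings)  # finger 2 is unbound
```

    Remapping the `bindings` table is what lets the same glove act as a mouse in one mode and a keyboard in another.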

  10. Multimodal audio guide for museums and exhibitions

    NASA Astrophysics Data System (ADS)

    Gebbensleben, Sandra; Dittmann, Jana; Vielhauer, Claus

    2006-02-01

    In our paper we introduce a new Audio Guide concept for exploring buildings, realms and exhibitions. Existing solutions mostly rely on pre-defined devices, which users have to buy or borrow. These systems often go along with complex technical installations and require a great degree of user training for device handling. Furthermore, the activation of audio commentary related to the exhibition objects is typically based on additional components like infrared, radio frequency or GPS technology. Beside the necessity of installing specific devices for user location, these approaches often support only automatic activation with no or limited user interaction. Therefore, elaboration of alternative concepts appears worthwhile. Motivated by these aspects, we introduce a new concept based on the visitor's own mobile smart phone. The advantages of our approach are twofold: firstly, the Audio Guide can be used in various places without any purchase or extensive installation of additional components in or around the exhibition object. Secondly, visitors can experience the exhibition on individual tours simply by uploading the Audio Guide at a single point of entry, the Audio Guide Service Counter, and keeping it on their personal device. Furthermore, the user is usually quite familiar with the interface of his or her own phone and can thus interact with the application easily. Our technical concept makes use of two general ideas for location detection and activation. Firstly, we suggest an enhanced interactive number-based activation by exploiting the visual capabilities of modern smart phones, and secondly we outline an active digital audio watermarking approach, where information about objects is transmitted via an analog audio channel.

  11. Development of a wireless blood pressure measuring device with smart mobile device.

    PubMed

    İlhan, İlhan; Yıldız, İbrahim; Kayrak, Mehmet

    2016-03-01

    Today, smart mobile devices (telephones and tablets) are very commonly used due to their powerful hardware and useful features. According to an eMarketer report, in 2014 there were 1.76 billion smartphone users (excluding users of tablets) in the world; it is predicted that this number will rise by 15.9% to 2.04 billion in 2015. It is thought that these devices can be used successfully in biomedical applications. A wireless blood pressure measuring device used together with a smart mobile device was developed in this study. By means of an interface developed for smart mobile devices with Android and iOS operating systems, a smart mobile device was used both as an indicator and as a control device. The cuff communicating with this device through Bluetooth was designed to measure blood pressure via the arm. A digital filter was used on the cuff instead of the traditional analog signal processing and filtering circuit. The newly developed blood pressure measuring device was tested on 18 patients and 20 healthy individuals of different ages under a physician's supervision. When the test results were compared with the measurements made using a sphygmomanometer, it was shown that an average 93.52% accuracy in sick individuals and 94.53% accuracy in healthy individuals could be achieved with the new device. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
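
    A digital filter replacing the analog filtering circuit on the cuff could be as simple as a first-order IIR low-pass that tracks the slow cuff-deflation baseline, with the small oscillometric pulses recovered as the residual. This is a generic sketch of the technique, not the paper's actual filter; the coefficient is an assumption.

```python
def iir_lowpass(samples, alpha=0.05):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).

    Small alpha -> slow tracking, suitable for following the cuff's
    deflation baseline while rejecting the fast oscillometric pulses.
    """
    y, out = samples[0], []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# Pulse extraction: subtract the baseline from the raw cuff signal.
raw = [100.0, 100.5, 99.8, 100.2, 99.9]
pulses = [x - b for x, b in zip(raw, iir_lowpass(raw))]
```

    On a microcontroller this costs one multiply and two adds per sample, which is why such filters displace analog signal-conditioning stages.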

  12. The role of assistive robotics in the lives of persons with disability.

    PubMed

    Brose, Steven W; Weber, Douglas J; Salatin, Ben A; Grindle, Garret G; Wang, Hongwu; Vazquez, Juan J; Cooper, Rory A

    2010-06-01

    Robotic assistive devices are used increasingly to improve the independence and quality of life of persons with disabilities. Devices as varied as robotic feeders, smart-powered wheelchairs, independent mobile robots, and socially assistive robots are becoming more clinically relevant. There is a growing importance for the rehabilitation professional to be aware of available systems and ongoing research efforts. The aim of this article is to describe the advances in assistive robotics that are relevant to professionals serving persons with disabilities. This review breaks down relevant advances into categories of Assistive Robotic Systems, User Interfaces and Control Systems, Sensory and Feedback Systems, and User Perspectives. An understanding of the direction that assistive robotics is taking is important for the clinician and researcher alike; this review is intended to address this need.

  13. Assistive device with conventional, alternative, and brain-computer interface inputs to enhance interaction with the environment for people with amyotrophic lateral sclerosis: a feasibility and usability study.

    PubMed

    Schettini, Francesca; Riccio, Angela; Simione, Luca; Liberati, Giulia; Caruso, Mario; Frasca, Vittorio; Calabrese, Barbara; Mecella, Massimo; Pizzimenti, Alessia; Inghilleri, Maurizio; Mattia, Donatella; Cincotti, Febo

    2015-03-01

    To evaluate the feasibility and usability of an assistive technology (AT) prototype designed to be operated with conventional/alternative input channels and a P300-based brain-computer interface (BCI) in order to provide users who have different degrees of muscular impairment resulting from amyotrophic lateral sclerosis (ALS) with communication and environmental control applications. Proof-of-principle study with a convenience sample. An apartment-like space designed to be fully accessible by people with motor disabilities for occupational therapy, placed in a neurologic rehabilitation hospital. End-users with ALS (N=8; 5 men, 3 women; mean age ± SD, 60 ± 12 y) recruited by a clinical team from an ALS center. Three experimental conditions based on (1) a widely validated P300-based BCI alone; (2) the AT prototype operated by a conventional/alternative input device tailored to the specific end-user's residual motor abilities; and (3) the AT prototype accessed by a P300-based BCI. These 3 conditions were presented to all participants in 3 different sessions. System usability was evaluated in terms of effectiveness (accuracy), efficiency (written symbol rate, time for correct selection, workload), and end-user satisfaction (overall satisfaction) domains. A comparison of the data collected in the 3 conditions was performed. Effectiveness and end-user satisfaction did not significantly differ among the 3 experimental conditions. Condition III was less efficient than condition II as expressed by the longer time for correct selection. A BCI can be used as an input channel to access an AT by persons with ALS, with no significant reduction of usability. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peffer, Therese; Blumstein, Carl; Culler, David

    The Project uses state-of-the-art computer science to extend the benefits of Building Automation Systems (BAS) typically found in large buildings (>100,000 square feet) to medium-sized commercial buildings (<50,000 sq ft). The BAS developed in this project, termed OpenBAS, uses an open-source and open software architecture platform, user interface, and plug-and-play control devices to facilitate adoption of energy efficiency strategies in the commercial building sector throughout the United States. At the heart of this “turn key” BAS is the platform with three types of controllers—thermostat, lighting controller, and general controller—that are easily “discovered” by the platform in a plug-and-play fashion. The user interface showcases the platform and provides the control system set-up, system status display and means of automatically mapping the control points in the system.

  15. SMART (Sandia's Modular Architecture for Robotics and Teleoperation) Ver. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert

    "SMART Ver. 0.8 Beta" provides a system developer with software tools to create a telerobotic control system, i.e., a system whereby an end-user can interact with mechatronic equipment. It consists of three main components: the SMART Editor (tsmed), the SMART Real-time kernel (rtos), and the SMART Supervisor (gui). The SMART Editor is a graphical icon-based code generation tool for creating end-user systems, given descriptions of SMART modules. The SMART real-time kernel implements behaviors that combine modules representing input devices, sensors, constraints, filters, and robotic devices. Included with this software release is a number of core modules, which can be combinedmore » with additional project and device specific modules to create a telerobotic controller. The SMART Supervisor is a graphical front-end for running a SMART system. It is an optional component of the SMART Environment and utilizes the TeVTk windowing and scripting environment. Although the code contained within this release is complete, and can be utilized for defining, running, and interfacing to a sample end-user SMART system, most systems will include additional project and hardware specific modules developed either by the system developer or obtained independently from a SMART module developer. SMART is a software system designed to integrate the different robots, input devices, sensors and dynamic elements required for advanced modes of telerobotic control. "SMART Ver. 0.8 Beta" defines and implements a telerobotic controller. A telerobotic system consists of combinations of modules that implement behaviors. Each real-time module represents an input device, robot device, sensor, constraint, connection or filter. The underlying theory utilizes non-linear discretized multidimensional network elements to model each individual module, and guarantees that upon a valid connection, the resulting system will perform in a stable fashion. 
Different combinations of modules implement different behaviors. Each module must have at a minimum an initialization routine, a parameter adjustment routine, and an update routine. The SMART runtime kernel runs continuously within a real-time embedded system. Each module is first set up by the kernel, initialized, and then updated at a fixed rate whenever it is in context. The kernel responds to operator directed commands by changing the state of the system, changing parameters on individual modules, and switching behavioral modes. The SMART Editor is a tool used to define, verify, configure and generate source code for a SMART control system. It uses icon representations of the modules, code patches from valid configurations of the modules, and configuration files describing how a module can be connected into a system to lead the end-user through the steps needed to create a final system. The SMART Supervisor serves as an interface to a SMART run-time system. It provides an interface on a host computer that connects to the embedded system via TCP/IP ASCII commands. It utilizes a scripting language (Tcl) and a graphics windowing environment (Tk). This system can either be customized to fit an end-user's needs or completely replaced as needed.
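
    The module contract the record describes (an initialization routine, a parameter adjustment routine, and an update routine driven at a fixed rate by the kernel) can be sketched as follows. This is an illustrative rendering of that contract, not Sandia's actual API; all names are invented.

```python
class Module:
    """Minimal contract: every module offers init, set_param, update."""
    def init(self): ...
    def set_param(self, name, value): ...
    def update(self, dt): ...

class GainFilter(Module):
    """Toy filter module: output = gain * input value."""
    def __init__(self):
        self.gain, self.value, self.output = 1.0, 0.0, 0.0
    def init(self):
        self.output = 0.0
    def set_param(self, name, value):
        if name == "gain":
            self.gain = value
    def update(self, dt):
        self.output = self.gain * self.value

def kernel_step(modules, dt):
    """One fixed-rate pass: update every module currently in context."""
    for m in modules:
        m.update(dt)

f = GainFilter()
f.init()
f.set_param("gain", 2.0)   # operator-directed parameter change
f.value = 3.0              # e.g., fed by an input-device module
kernel_step([f], dt=0.001)
```

    Chaining such modules (input device, constraint, filter, robot device) is what lets different module combinations implement different behaviors.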

  16. An energy-efficient architecture for internet of things systems

    NASA Astrophysics Data System (ADS)

    De Rango, Floriano; Barletta, Domenico; Imbrogno, Alessandro

    2016-05-01

    In this paper some of the motivations for energy-efficient communications in wireless systems are described, highlighting emerging trends and identifying challenges that need to be addressed to enable novel, scalable and energy-efficient communications. An architecture for Internet of Things systems is then presented whose purpose is to minimize energy consumption by communication devices, protocols, networks, end-user systems and data centers. Several electrical devices were designed with multiple communication interfaces, such as RF or WiFi, using open-source technology, and were analyzed under different working conditions. Some devices are programmed to communicate directly with a web server, others to communicate only with a special device that acts as a bridge between some devices and the web server. Communication parameters and device status are changed dynamically according to different scenarios in order to obtain the greatest benefit in terms of energy cost and battery lifetime. Thus the way devices communicate with the web server or with each other, and the way they obtain the information they need to stay up to date, changes dynamically so as to guarantee the lowest energy consumption, a long battery lifetime, the fastest responses and feedback, and the best quality of service and communication for end users and the inner devices of the system.
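
    Dynamically choosing the cheapest radio for a given message, as described above, reduces to minimizing a per-message energy-cost model. The sketch below is generic; the radio names and energy figures are invented for illustration and do not come from the paper.

```python
def pick_interface(payload_bytes, radios):
    """Return the radio with the lowest per-message energy.

    radios: {name: (setup_joules, joules_per_byte)} -- a fixed
    wake-up/connection cost plus a per-byte transmission cost.
    """
    def cost(name):
        setup, per_byte = radios[name]
        return setup + per_byte * payload_bytes
    return min(radios, key=cost)

# Hypothetical numbers: a low-rate RF link is cheap to wake but slow
# per byte; WiFi has a costly association but cheap bulk transfer.
radios = {"rf433": (0.001, 2e-6), "wifi": (0.05, 2e-7)}
```

    The crossover payload size at which WiFi wins follows directly from equating the two cost lines.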

  17. Trace saver: A tool for network service improvement and personalised analysis of user centric statistics

    NASA Astrophysics Data System (ADS)

    Bilal, Muhammad; Asfand-e-Yar; Mockford, Steve; Khan, Wasiq; Awan, Irfan

    2012-11-01

    Mobile technology is among the fastest growing technologies in today's world, with low cost and highly effective benefits. The most important and entertaining areas in mobile technology development and usage are location-based services, user-friendly networked applications and gaming applications. However, attention to network operator service provision and improvement has been very low. Portable applications that help improve network operator services, available for a range of mobile operating systems, are desirable to mobile operators. This paper presents Tracesaver, a state-of-the-art mobile application that overcomes the barriers to gathering device- and network-related information that network operators need to improve their service provision. Tracesaver is available for a broad range of mobile devices with different mobile operating systems and computational capabilities. Its adoption has proliferated over the last year since it was published. The survey and results show that Tracesaver is used by millions of mobile users and provides novel ways of network service improvement with its highly user-friendly interface.

  18. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    This report describes the user interface (hardware and software), the design space, and preliminary results of a formal user study, in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations. Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.

  19. TexiCare: an innovative embedded device for pressure ulcer prevention. Preliminary results with a paraplegic volunteer.

    PubMed

    Chenu, Olivier; Vuillerme, Nicolas; Bucki, Marek; Diot, Bruno; Cannard, Francis; Payan, Yohan

    2013-08-01

    This paper introduces the recently developed TexiCare device, which aims at preventing pressure ulcers in people with spinal cord injury. The embedded device is designed to be mounted on the user's wheelchair. Its sensor is 100% textile and allows the measurement of pressures at the interface between the cushion and the buttocks. It is comfortable, washable and low cost. It is connected to a cigarette-box-sized unit that (i) measures the pressures in real time, (ii) estimates the risk of internal over-strains, and (iii) alerts the wheelchair user whenever necessary. The alert method was defined as a result of a utility/usability/acceptability study conducted with representative end users. It is based on tactile-visual feedback (via a watch or a smartphone, for example): the tactile modality discreetly alerts the person while the visual modality conveys an informative message. To evaluate the usability of the TexiCare device, a paraplegic volunteer equipped his wheelchair at home during a six-month period. Interestingly, the first results revealed bad habits such as an inadequate posture when watching TV, rare relief maneuvers, and the occurrence of abnormally high pressures. Copyright © 2013 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved.

  20. Femur-mounted navigation system for the arthroscopic treatment of femoroacetabular impingement

    NASA Astrophysics Data System (ADS)

    Park, S. H.; Hwang, D. S.; Yoon, Y. S.

    2013-07-01

    Femoroacetabular impingement stems from an abnormal shape of the acetabulum and proximal femur. It is treated by resecting damaged soft tissue and shaping the bone to resemble normal features. The arthroscopic treatment of femoroacetabular impingement has many advantages, including minimal incisions, rapid recovery, and less pain. In some cases, however, revision is needed owing to insufficient resection of damaged bone caused by misreading the surgical site; the limited view of arthroscopy is the major reason for such complications. In this research, a navigation method for the arthroscopic treatment of femoroacetabular impingement is developed. The proposed navigation system consists of a femur-attachable measurement device and a user interface. The bone-mounted measurement device measures points on the head-neck junction for registration and tracks the position of the surgical instrument. The user interface shows a three-dimensional model of the patient's femur together with the surgical instrument position tracked by the measurement device, so the surgeon can see the three-dimensional anatomical structure of the hip joint and the instrument position at the surgical site. Surface registration was used to obtain the relation between the patient's coordinate frame at the surgical site and the coordinate frame of the three-dimensional femur model. In this research, we evaluated the proposed navigation system using a plastic model bone. The surgical tool position tracking accuracy is expected to be better than 1 mm.
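    The core of such a surface-registration step is a least-squares rigid transform between two point sets. A minimal sketch using the standard Kabsch/SVD solution (a common choice for this problem; the abstract does not state which algorithm the authors used):

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    computed with the Kabsch/SVD method. Rows of P and Q are
    corresponding 3-D points (e.g. digitized head-neck junction points
    and their counterparts on the femur model)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det = +1
    t = cq - R @ cp
    return R, t
```

    In practice the correspondences are unknown, so this closed-form step is iterated inside an ICP-style loop; the transform itself is computed exactly as above.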

  1. EXiO-A Brain-Controlled Lower Limb Exoskeleton for Rhesus Macaques.

    PubMed

    Vouga, Tristan; Zhuang, Katie Z; Olivier, Jeremy; Lebedev, Mikhail A; Nicolelis, Miguel A L; Bouri, Mohamed; Bleuler, Hannes

    2017-02-01

    Recent advances in the field of brain-machine interfaces (BMIs) have demonstrated enormous potential to shape the future of rehabilitation and prosthetic devices. Here, a lower-limb exoskeleton controlled by the intracortical activity of an awake behaving rhesus macaque is presented as a proof-of-concept for a locomotor BMI. A detailed description of the mechanical device, including its innovative features and first experimental results, is provided. During operation, BMI-decoded position and velocity are directly mapped onto the bipedal exoskeleton's motions, which then move the monkey's legs as the monkey remains physically passive. To meet the unique requirements of such an application, the exoskeleton's features include high output torque with backdrivable actuation, size adjustability, and a safe user-robot interface. In addition, a novel rope transmission is introduced and implemented. To test the performance of the exoskeleton, a mechanical assessment was conducted, which yielded quantifiable results for transparency, efficiency, stiffness, and tracking performance. Usage under both brain control and automated actuation demonstrates the device's capability to fulfill the demanding needs of this application. These results lay the groundwork for further advancement in BMI-controlled devices for primates, including humans.

  2. Graphical user interface simplifies infusion pump programming and enhances the ability to detect pump-related faults.

    PubMed

    Syroid, Noah; Liu, David; Albert, Robert; Agutter, James; Egan, Talmage D; Pace, Nathan L; Johnson, Ken B; Dowdle, Michael R; Pulsipher, Daniel; Westenskow, Dwayne R

    2012-11-01

    Drug administration errors are frequent and are often associated with the misuse of IV infusion pumps. One source of these errors may be the infusion pump's user interface. We used failure modes-and-effects analyses to identify programming errors and to guide the design of a new syringe pump user interface. We designed the new user interface to clearly show the pump's operating state simultaneously in more than 1 monitoring location. We evaluated anesthesia residents in laboratory and simulated environments on programming accuracy and error detection between the new user interface and the user interface of a commercially available infusion pump. With the new user interface, the number of programming errors was reduced by 81%, the number of keystrokes per task was reduced from 9.2 ± 5.0 to 7.5 ± 5.5 (mean ± SD), the time required per task was reduced from 18.1 ± 14.1 seconds to 10.9 ± 9.5 seconds, and perceived workload was significantly lower. Residents detected 38 of 70 (54%) of the events with the new user interface and 37 of 70 (53%) with the existing user interface, despite no experience with the new user interface and extensive experience with the existing interface. The number of programming errors and workload were reduced partly because it took less time and fewer keystrokes to program the pump when using the new user interface. Despite minimal training, residents quickly identified preexisting infusion pump problems with the new user interface. Intuitive and easy-to-program infusion pump interfaces may reduce drug administration errors and infusion pump-related adverse events.

  3. Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.

    PubMed

    Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar

    2016-05-01

    Physical rehabilitation supported by computer-assisted interfaces is gaining popularity among the health-care fraternity. In this paper, we propose a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. A Leap Motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have developed an interface using the Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMMs) has been used to classify gesture sequences performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly when applied to isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
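    The HMM-based sequence classification described here amounts to training one discrete HMM per gesture class and assigning a new observation sequence to the class whose model gives the highest likelihood. A minimal sketch of that scoring step (the toy one-state models below are illustrative, not the parameters learned in the study):

```python
# Sketch of HMM-based gesture-sequence classification: score a discrete
# observation sequence under each class's HMM with the forward algorithm
# and pick the best-scoring class. Model parameters here are made up.

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) for a discrete HMM: pi = initial state probs,
    A = state transition matrix, B = per-state emission probs."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][s2] for s in range(n)) * B[s2][o]
                 for s2 in range(n)]
    return sum(alpha)

def classify(obs, models):
    """models: {class_name: (pi, A, B)}; return the best-scoring class."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))
```

    A production system would score in log space to avoid underflow on long sequences, but the decision rule is the same.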

  4. Designing a Facebook interface for senior users.

    PubMed

    Gomes, Gonçalo; Duarte, Carlos; Coelho, José; Matos, Eduardo

    2014-01-01

    The adoption of social networks by older adults has increased in recent years. However, many still cannot make use of social networks, as these are simply not adapted to them. Through a series of direct observations, interviews, and focus groups, we identified recommendations for the design of social networks targeting seniors. Based on these, we developed a prototype for tablet devices supporting sharing and viewing Facebook content. We then conducted a user study comparing our prototype with Facebook's native mobile application. We found that Facebook's native application does not address senior users' concerns, such as privacy and family focus, while our prototype, designed in accordance with the collected recommendations, supported relevant use cases in a usable and accessible manner.

  5. Advanced Resistive Exercise Device (ARED) Flight Software (FSW): A Unique Approach to Exercise in Long Duration Habitats

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark

    2005-01-01

    ARED flight instrumentation software is associated with an overall custom-designed resistive exercise system that will be deployed on the International Space Station (ISS). This innovative software application fuses together many diverse and new technologies into a robust and usable package. The software takes advantage of touchscreen user interface technology by providing a graphical user interface on a Windows-based tablet PC, meeting a design constraint of keyboard-less interaction with flight crewmembers. The software interacts with modified commercial data acquisition (DAQ) hardware to acquire multiple channels of sensor measurements from the ARED device. This information is recorded on the tablet PC and made available, via International Space Station (ISS) Wireless LAN (WLAN) and telemetry subsystems, to ground-based mission medics and trainers for analysis. The software includes a feature to accept electronically encoded prescriptions of exercises that guide crewmembers through a customized regimen of resistive weight training, based on personal analysis. These electronically encoded prescriptions are provided to the crew via ISS WLAN and telemetry subsystems. All personal data is securely associated with an individual crew member, based on a PIN ID mechanism.

  6. Prototype of an auto-calibrating, context-aware, hybrid brain-computer interface.

    PubMed

    Faller, J; Torrellas, S; Miralles, F; Holzner, C; Kapeller, C; Guger, C; Bund, J; Müller-Putz, G R; Scherer, R

    2012-01-01

    We present the prototype of a context-aware framework that allows users to control smart home devices and to access internet services via a hybrid BCI system combining an auto-calibrating sensorimotor rhythm (SMR) based BCI and another assistive device (the Integra Mouse mouth joystick). While there is extensive literature describing the merit of hybrid BCIs, auto-calibrating and co-adaptive ERD BCI training paradigms, specialized BCI user interfaces, context-awareness and smart home control, there has been, up to now, no system that includes all these concepts in one integrated, easy-to-use framework that can truly benefit individuals with severe functional disabilities by increasing independence and social inclusion. Here we integrate all these technologies in a prototype framework that does not require expert knowledge or excess time for calibration. In a first pilot study, 3 healthy volunteers successfully operated the system using input signals from an ERD BCI and an Integra Mouse and reached average positive predictive values (PPV) of 72% and 98%, respectively. Based on what we learned here, we are planning to improve the system for a test with a larger number of healthy volunteers so we can soon bring the system to benefit individuals with severe functional disability.
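    The positive predictive value reported here is the standard confusion-matrix quantity: of all selections the system issued, the fraction that matched the user's intent. A one-line sketch (the counts in the test are illustrative, not the study's raw data):

```python
def positive_predictive_value(tp, fp):
    """PPV = TP / (TP + FP): the fraction of issued commands that were
    correct. A PPV of 0.72 means 72 of every 100 selections the
    interface made matched the user's intended target."""
    return tp / (tp + fp)
```

    Note that PPV says nothing about missed commands (false negatives); a BCI can have high PPV while still being slow to respond.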

  7. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    NASA Astrophysics Data System (ADS)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for design of biomicrofluidics devices and integrated systems. The developed tool would provide high fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, and SPICE interface for system level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  8. Power API Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-12-04

    The software serves two purposes. The first purpose of the software is to prototype the Sandia High Performance Computing Power Application Programming Interface Specification effort. The specification can be found at http://powerapi.sandia.gov . Prototypes of the specification were developed in parallel with the development of the specification. Release of the prototype will be instructive to anyone who intends to implement the specification. More specifically, our vendor collaborators will benefit from the availability of the prototype. The second is in direct support of the PowerInsight power measurement device, which was co-developed with Penguin Computing. The software provides a cluster-wide measurement capability enabled by the PowerInsight device. The software can be used by anyone who purchases a PowerInsight device. The software will allow the user to easily collect power and energy information of a node that is instrumented with PowerInsight. The software can also be used as an example prototype implementation of the High Performance Computing Power Application Programming Interface Specification.
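    Collecting "power and energy information" with such a tool ultimately reduces to sampling instantaneous node power and integrating it over time. A minimal sketch of that integration step (the sampling source is a stand-in, not an actual Power API or PowerInsight call):

```python
# Sketch only: turn a series of power samples into an energy figure via
# trapezoidal integration. How the samples are obtained (PowerInsight,
# RAPL, etc.) is outside this fragment; here they are just a list.

def energy_joules(power_samples, dt):
    """Energy (J) from power samples (W) taken at a fixed interval of
    dt seconds, using the trapezoidal rule."""
    return sum((a + b) / 2.0 * dt
               for a, b in zip(power_samples, power_samples[1:]))
```

    For example, a node drawing a steady 100 W sampled once per second for three samples spans two 1-second intervals, i.e. 200 J.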

  9. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into the synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model of the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  10. Wireless opto-electro neural interface for experiments with small freely behaving animals.

    PubMed

    Jia, Yaoyao; Khan, Wasif; Lee, Byunghun; Fan, Bin; Madi, Fatma; Weber, Arthur; Li, Wen; Ghovanloo, Maysam

    2018-05-25

    We have developed a wireless opto-electro interface (WOENI) device, which combines electrocorticogram (ECoG) recording and optical stimulation for bi-directional neuromodulation on small, freely behaving animals, such as rodents. The device comprises two components, a detachable headstage and an implantable polyimide-based substrate. The headstage establishes Bluetooth Low Energy (BLE) bi-directional data communication with an external custom-designed USB dongle for receiving user commands and optogenetic stimulation patterns, and sending digitized ECoG data. The functionality and stability of the device were evaluated in vivo on freely behaving rats. When the animal received optical stimulation on the primary visual cortex (V1) and visual stimulation via the eyes, spontaneous changes in ECoG signals were recorded from both left and right V1 during 4 consecutive experiments with 7-day intervals over a time span of 21 days following device implantation. Immunostained tissue analyses showed results consistent with the ECoG analyses, validating the efficacy of optical stimulation to upregulate the activity of cortical neurons expressing ChR2. The proposed WOENI device is potentially a versatile tool in studies that involve long-term optogenetic neuromodulation. © 2018 IOP Publishing Ltd.

  11. Analysis and prediction of meal motion by EMG signals

    NASA Astrophysics Data System (ADS)

    Horihata, S.; Iwahara, H.; Yano, K.

    2007-12-01

    The lack of carers for senior citizens and physically handicapped persons in our country has become a huge issue and has created a great need for carer robots. The usual carer robots (many of which have switches or joysticks as their interfaces), however, are neither easy to use nor very popular. Therefore, haptic devices have been adopted as a human-machine interface that enables intuitive operation. At this point, a method is being tested that seeks to prevent wrong operations by matching motions with the user's EMG signals.

  12. Virtual reality applications to automated rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Hale, Joseph; Oneil, Daniel

    1991-01-01

    Virtual Reality (VR) is a rapidly developing Human/Computer Interface (HCI) technology. The evolution of high-speed graphics processors and the development of specialized anthropomorphic user interface devices, which more fully involve the human senses, have enabled VR technology. Recently, the maturity of this technology has reached a level where it can be used as a tool in a variety of applications. This paper provides an overview of VR technology, VR activities at Marshall Space Flight Center (MSFC), and applications of VR to Automated Rendezvous and Capture (AR&C), and identifies areas of VR technology that require further development.

  13. Eight microprocessor-based instrument data systems in the Galileo Orbiter spacecraft

    NASA Technical Reports Server (NTRS)

    Barry, R. C.

    1980-01-01

    Each instrument data system consists of a microprocessor, 3K bytes of Read Only Memory, and 3K bytes of Random Access Memory. It interfaces with the spacecraft data bus through an isolated user interface with a direct memory access bus adaptor, and/or with parallel data from instrument devices such as registers, buffers, analog-to-digital converters, multiplexers, and solid-state sensors. These data systems support the spacecraft hardware and software communication protocol, decode and process instrument commands, generate continuous instrument operating modes, control the instrument mechanisms, and acquire, process, format, and output instrument science data.

  14. 'Fly Like This': Natural Language Interface for UAV Mission Planning

    NASA Technical Reports Server (NTRS)

    Chandarana, Meghan; Meszaros, Erica L.; Trujillo, Anna; Allen, B. Danette

    2017-01-01

    With the increasing presence of unmanned aerial vehicles (UAVs) in everyday environments, the user base of these powerful and potentially intelligent machines is expanding beyond exclusively highly trained vehicle operators to include non-expert system users. Scientists seeking to augment the costly and often inflexible data collection methods historically used are turning toward lower-cost, reconfigurable UAVs. These new users require more intuitive and natural methods for UAV mission planning. This paper explores two natural language interfaces - gesture and speech - for UAV flight path generation through individual user studies. Subjects who participated in the user studies also used a mouse-based interface for a baseline comparison. Each interface allowed the user to build flight paths from a library of twelve individual trajectory segments. Individual user studies evaluated performance, efficacy, and ease-of-use of each interface using background surveys, subjective questionnaires, and observations on time and correctness. Analysis indicates that natural language interfaces are promising alternatives to traditional interfaces. The user study data collected on the efficacy and potential of each interface will be used to inform future intuitive UAV interface design for non-expert users.

  15. Grasp specific and user friendly interface design for myoelectric hand prostheses.

    PubMed

    Mohammadi, Alireza; Lavranos, Jim; Howe, Rob; Choong, Peter; Oetomo, Denny

    2017-07-01

    This paper presents the design and characterisation of a hand prosthesis and its user interface, focusing on performing the grasps most commonly used in activities of daily living (ADLs). Since the operation of a multi-articulated powered hand prosthesis is difficult to learn and master, there is a significant rate of abandonment by amputees in preference for simpler devices. In doing so, amputees choose to live with fewer features in a prosthesis that more reliably performs the basic operations. In this paper, we look simultaneously at a hand prosthesis design method that aims for a small number of grasps, a low-complexity user interface, and an alternative to the current use of EMG for preshape selection in the form of a simple button, to enable amputees to reach and execute the intended hand movements intuitively, quickly and reliably. An experiment is reported at the end of the paper comparing the speed and accuracy with which able-bodied naive subjects are able to select the intended preshapes through a simplified EMG method and through a simple button. It is shown that the button was significantly superior in the speed of successful task completion and marginally superior in accuracy (success on first attempt).

  16. Elasticity improves handgrip performance and user experience during visuomotor control

    PubMed Central

    Rinne, Paul; Liardon, Jean-Luc; Uhomoibhi, Catherine; Bentley, Paul; Burdet, Etienne

    2017-01-01

    Passive rehabilitation devices, providing motivation and feedback, potentially offer an automated and low-cost therapy method, and can be used as simple human–machine interfaces. Here, we ask whether there is any advantage for a hand-training device to be elastic, as opposed to rigid, in terms of performance and preference. To address this question, we have developed a highly sensitive and portable digital handgrip, promoting independent and repetitive rehabilitation of grasp function based around a novel elastic force and position sensing structure. A usability study was performed on 66 healthy subjects to assess the effect of elastic versus rigid handgrip control during various visuomotor tracking tasks. The results indicate that, for tasks relying either on feedforward or on feedback control, novice users perform significantly better with the elastic handgrip, compared with the rigid equivalent (11% relative improvement, 9–14% mean range; p < 0.01). Furthermore, there was a threefold increase in the number of subjects who preferred elastic compared with rigid handgrip interaction. Our results suggest that device compliance is an important design consideration for grip training devices. PMID:28386448

  17. Elasticity improves handgrip performance and user experience during visuomotor control.

    PubMed

    Mace, Michael; Rinne, Paul; Liardon, Jean-Luc; Uhomoibhi, Catherine; Bentley, Paul; Burdet, Etienne

    2017-02-01

    Passive rehabilitation devices, providing motivation and feedback, potentially offer an automated and low-cost therapy method, and can be used as simple human-machine interfaces. Here, we ask whether there is any advantage for a hand-training device to be elastic, as opposed to rigid, in terms of performance and preference. To address this question, we have developed a highly sensitive and portable digital handgrip, promoting independent and repetitive rehabilitation of grasp function based around a novel elastic force and position sensing structure. A usability study was performed on 66 healthy subjects to assess the effect of elastic versus rigid handgrip control during various visuomotor tracking tasks. The results indicate that, for tasks relying either on feedforward or on feedback control, novice users perform significantly better with the elastic handgrip, compared with the rigid equivalent (11% relative improvement, 9-14% mean range; p < 0.01). Furthermore, there was a threefold increase in the number of subjects who preferred elastic compared with rigid handgrip interaction. Our results suggest that device compliance is an important design consideration for grip training devices.

  18. Development of a task analysis tool to facilitate user interface design

    NASA Technical Reports Server (NTRS)

    Scholtz, Jean C.

    1992-01-01

    A good user interface is one that facilitates the user in carrying out his task. Such interfaces are difficult and costly to produce. The most important aspect in producing a good interface is the ability to communicate to the software designers what the user's task is. The Task Analysis Tool is a system for cooperative task analysis and specification of the user interface requirements. This tool is intended to serve as a guide to development of initial prototypes for user feedback.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krempasky, J.; Flechsig, U.; Korhonen, T.

    Synchronous monochromator and insertion device energy scans were implemented at the Surfaces/Interfaces: Microscopy (SIM) beamline in order to provide users with fast X-ray magnetic circular dichroism (XMCD) studies. A simple software control scheme is proposed, based on a fast monochromator run-time energy readback which quickly updates the insertion device's requested energy during an on-the-fly X-ray absorption scan (XAS). In this scheme the Plane Grating Monochromator (PGM) motion control, being much slower than the insertion device (an APPLE-II type undulator), acts as a 'master' controlling the undulator 'slave' energy position. This master-slave software implementation exploits EPICS distributed device control over a computer network and allows for quasi-synchronous motion control combined with the data acquisition needed for the XAS or XMCD experiment.
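    The master-slave idea can be reduced to a small loop: on every monochromator energy readback, re-derive the undulator setpoint and issue it, so the faster device continuously tracks the slower one. The sketch below simulates that logic in plain Python; the linear energy-to-setpoint mapping is a made-up placeholder, not the APPLE-II calibration, and real deployments would issue the setpoints through EPICS process variables rather than a function call.

```python
# Toy simulation of the master-slave scan scheme: the slower
# monochromator readback is the master, and each update slaves the
# undulator setpoint to the current photon energy. The mapping below is
# a hypothetical stand-in for the undulator's energy calibration.

def undulator_setpoint(energy_ev):
    """Hypothetical undulator setpoint (e.g. gap in mm) for a photon
    energy in eV; a real beamline uses a measured calibration table."""
    return 20.0 + 0.01 * energy_ev

def run_scan(mono_readbacks):
    """Follow the monochromator energy readbacks during an on-the-fly
    scan and return the sequence of undulator setpoints issued."""
    return [undulator_setpoint(e) for e in mono_readbacks]
```

    The key design point is that the slave never waits for the master: it simply retargets on every readback, which is what makes the combined motion quasi-synchronous.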

  20. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation

    PubMed Central

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that makes it possible to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), both of which correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70–90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces. PMID:29046625

  1. CMCC Data Distribution Centre

    NASA Astrophysics Data System (ADS)

    Aloisio, Giovanni; Fiore, Sandro; Negro, A.

    2010-05-01

    The CMCC Data Distribution Centre (DDC) is the primary entry point (web gateway) to the CMCC. It is a Data Grid Portal providing a ubiquitous and pervasive way to ease data publishing, climate metadata search, dataset discovery, metadata annotation, data access, data aggregation, sub-setting, etc. The grid portal security model includes the use of the HTTPS protocol for secure communication with the client (based on X509v3 certificates that must be loaded into the browser) and secure cookies to establish and maintain user sessions. The CMCC DDC is now in a pre-production phase and is currently used only by internal users (CMCC researchers and climate scientists). The most important component already available in the CMCC DDC is the Search Engine, which allows users to perform, through web interfaces, distributed search and discovery activities by introducing one or more of the following search criteria: horizontal extent (which can be specified by interacting with a geographic map), vertical extent, temporal extent, keywords, topics, creation date, etc. By means of this page the user submits the first step of the query process on the metadata DB; then she can choose one or more datasets, retrieving and displaying the complete XML metadata description in the browser. This way, the second step of the query process is carried out by accessing a specific XML document of the metadata DB. Finally, through the web interface, the user can access and download (partially or totally) the data stored on the storage devices via OPeNDAP servers and other available grid storage interfaces. Requests concerning datasets stored in deep storage will be served asynchronously.

  2. Assessing the Depth of Cognitive Processing as the Basis for Potential User-State Adaptation.

    PubMed

    Nicolae, Irina-Emilia; Acqualagna, Laura; Blankertz, Benjamin

    2017-01-01

    Objective: Decoding neurocognitive processes on a single-trial basis with Brain-Computer Interface (BCI) techniques can reveal the user's internal interpretation of the current situation. Such information can potentially be exploited to make devices and interfaces more user aware. In this line of research, we took a further step by studying neural correlates of different levels of cognitive processes and developing a method that makes it possible to quantify how deeply presented information is processed in the brain. Methods/Approach: Seventeen participants took part in an EEG study in which we evaluated different levels of cognitive processing (no processing, shallow, and deep processing) within three distinct domains (memory, language, and visual imagination). Our investigations showed gradual differences in the amplitudes of event-related potentials (ERPs) and in the extent and duration of event-related desynchronization (ERD), both of which correlate with task difficulty. We performed multi-modal classification to map the measured correlates of neurocognitive processing to the corresponding level of processing. Results: Successful classification of the neural components was achieved, which reflects the level of cognitive processing performed by the participants. The results show performances above chance level for each participant and a mean performance of 70-90% for all conditions and classification pairs. Significance: The successful estimation of the level of cognition on a single-trial basis supports the feasibility of user-state adaptation based on ongoing neural activity. There is a variety of potential use cases, such as a user-friendly adaptive design of an interface or the development of assistance systems in safety-critical workplaces.

  3. Time Pattern Locking Scheme for Secure Multimedia Contents in Human-Centric Device

    PubMed Central

    Kim, Hyun-Woo; Kim, Jun-Ho; Park, Jong Hyuk; Jeong, Young-Sik

    2014-01-01

    Among the various smart multimedia devices, multimedia smartphones have become the most widespread due to their convenient portability and real-time information sharing, as well as various other built-in features. Accordingly, since personal and business activities can be carried out using multimedia smartphones without restrictions based on time and location, people have more leisure time and convenience than ever. However, problems such as loss, theft, and information leakage because of convenient portability have also increased proportionally. As a result, most multimedia smartphones are equipped with various built-in locking features. Pattern lock, personal identification numbers, and passwords are the most used locking features on current smartphones, but these are vulnerable to shoulder surfing and smudge attacks, allowing malicious users to bypass the security feature easily. In particular, the smudge attack technique is a convenient way to unlock multimedia smartphones after they have been stolen. In this paper, we propose the secure locking screen using time pattern (SLSTP), focusing on improved security and convenience for users to fully support human-centric multimedia devices. The SLSTP can provide a simple interface to users and reduce the risk factors pertaining to security leakage to malicious third parties. PMID:25202737

  4. Pen-Based Interface Using Hand Motions in the Air

    NASA Astrophysics Data System (ADS)

    Suzuki, Yu; Misue, Kazuo; Tanaka, Jiro

    A system which employs a stylus as an input device is suitable for creative activities like writing and painting. However, such a system does not always provide the user with a GUI that is easy to operate using the stylus. In addition, system usability is diminished because the stylus is not always integrated into the system in a way that takes into consideration the features of a pen. The purpose of our research is to improve the usability of a system which uses a stylus as an input device. We propose shortcut actions, which are interaction techniques for operation with a stylus that are controlled through a user's hand motions made in the air. We developed the Context Sensitive Stylus as a device to implement the shortcut actions. The Context Sensitive Stylus consists of an accelerometer and a conventional stylus. We also developed application programs to which we applied the shortcut actions; e.g., a drawing tool, a scroll supporting tool, and so on. Results from our evaluation of the shortcut actions indicate that users can concentrate better on their work when using the shortcut actions than when using conventional menu operations.

  5. Time pattern locking scheme for secure multimedia contents in human-centric device.

    PubMed

    Kim, Hyun-Woo; Kim, Jun-Ho; Park, Jong Hyuk; Jeong, Young-Sik

    2014-01-01

    Among the various smart multimedia devices, multimedia smartphones have become the most widespread due to their convenient portability and real-time information sharing, as well as various other built-in features. Accordingly, since personal and business activities can be carried out using multimedia smartphones without restrictions based on time and location, people have more leisure time and convenience than ever. However, problems such as loss, theft, and information leakage because of convenient portability have also increased proportionally. As a result, most multimedia smartphones are equipped with various built-in locking features. Pattern lock, personal identification numbers, and passwords are the most used locking features on current smartphones, but these are vulnerable to shoulder surfing and smudge attacks, allowing malicious users to bypass the security feature easily. In particular, the smudge attack technique is a convenient way to unlock multimedia smartphones after they have been stolen. In this paper, we propose the secure locking screen using time pattern (SLSTP), focusing on improved security and convenience for users to fully support human-centric multimedia devices. The SLSTP can provide a simple interface to users and reduce the risk factors pertaining to security leakage to malicious third parties.

  6. Interface Prostheses With Classifier-Feedback-Based User Training.

    PubMed

    Fang, Yinfeng; Zhou, Dalin; Li, Kairu; Liu, Honghai

    2017-11-01

    It is evident that user training significantly affects performance of pattern-recognition-based myoelectric prosthetic device control. Despite plausible classification accuracy on offline datasets, online accuracy usually suffers from the changes in physiological conditions and electrode displacement. The user's ability to generate consistent electromyographic (EMG) patterns can be enhanced via proper user training strategies in order to improve online performance. This study proposes a clustering-feedback strategy that provides real-time feedback to users by means of a visualized online EMG signal input as well as the centroids of the training samples, whose dimensionality is reduced to a minimal number by dimension reduction. Clustering feedback provides a criterion that guides users to adjust motion gestures and muscle contraction forces intentionally. The experiment results have demonstrated that hand motion recognition accuracy increases steadily over the course of clustering-feedback-based user training, while conventional classifier-feedback methods, i.e., label feedback, hardly achieve any improvement. The results indicate that proper classifier feedback can accelerate user training, a promising prospect for amputees with limited or no experience in pattern-recognition-based prosthetic device manipulation.
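The clustering-feedback visualization described above can be sketched roughly as follows: project high-dimensional EMG feature vectors to 2-D and show each motion class's centroid as a target for the user. The synthetic features and the use of PCA for the dimension reduction are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

# Sketch: per-class centroids of EMG feature vectors in a 2-D PCA space,
# the kind of target display used for clustering feedback. Data is synthetic.
rng = np.random.default_rng(0)
n_classes, n_per_class, n_feats = 3, 40, 8
X = np.vstack([rng.normal(loc=3.0 * c, scale=1.0, size=(n_per_class, n_feats))
               for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), n_per_class)

# PCA via SVD of the mean-centred data; keep the two leading components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X2d = Xc @ Vt[:2].T

# Per-class centroids in the reduced space: the targets shown to the user.
centroids = np.array([X2d[labels == c].mean(axis=0) for c in range(n_classes)])
print(centroids.shape)  # (3, 2)
```

Online, the user's incoming EMG sample would be projected with the same `Vt[:2]` matrix and drawn alongside these centroids.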

  7. A Systematic Review of Tablet Computers and Portable Media Players as Speech Generating Devices for Individuals with Autism Spectrum Disorder.

    PubMed

    Lorah, Elizabeth R; Parnell, Ashley; Whitby, Peggy Schaefer; Hantula, Donald

    2015-12-01

    Powerful, portable, off-the-shelf handheld devices, such as tablet based computers (e.g., iPad®, Galaxy®) or portable multimedia players (e.g., iPod®), can be adapted to function as speech generating devices for individuals with autism spectrum disorders or related developmental disabilities. This paper reviews the research in this new and rapidly growing area and delineates an agenda for future investigations. In general, participants using these devices acquired verbal repertoires quickly. Studies comparing these devices to picture exchange or manual sign language found that acquisition was often quicker when using a tablet computer and that the vast majority of participants preferred using the device to picture exchange or manual sign language. Future research in interface design, user experience, and extended verbal repertoires is recommended.

  8. Formal analysis and automatic generation of user interfaces: approach, methodology, and an algorithm.

    PubMed

    Heymann, Michael; Degani, Asaf

    2007-04-01

    We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.

  9. An electric stimulation system for electrokinetic particle manipulation in microfluidic devices.

    PubMed

    Lopez-de la Fuente, M S; Moncada-Hernandez, H; Perez-Gonzalez, V H; Lapizco-Encinas, B H; Martinez-Chapa, S O

    2013-03-01

    Microfluidic devices have grown significantly in the number of applications. Microfabrication techniques have evolved considerably; however, electric stimulation systems for microdevices have not advanced at the same pace. Electric stimulation of micro-fluidic devices is an important element in particle manipulation research. A flexible stimulation instrument is desired to perform configurable, repeatable, automated, and reliable experiments by allowing users to select the stimulation parameters. The instrument presented here is a configurable and programmable stimulation system for electrokinetic-driven microfluidic devices; it consists of a processor, a memory system, and a user interface to deliver several types of waveforms and stimulation patterns. It has been designed to be a flexible, highly configurable, low power instrument capable of delivering sine, triangle, and sawtooth waveforms with one single frequency or two superimposed frequencies ranging from 0.01 Hz to 40 kHz, and an output voltage of up to 30 Vpp. A specific stimulation pattern can be delivered over a single time period or as a sequence of different signals for different time periods. This stimulation system can be applied as a research tool where manipulation of particles suspended in liquid media is involved, such as biology, medicine, environment, embryology, and genetics. This system has the potential to lead to new schemes for laboratory procedures by allowing application specific and user defined electric stimulation. The development of this device is a step towards portable and programmable instrumentation for electric stimulation on electrokinetic-based microfluidic devices, which are meant to be integrated with lab-on-a-chip devices.
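The waveform synthesis the instrument performs can be sketched in a few lines: sine, triangle, and sawtooth generation, optional superposition of two frequencies, and scaling to a peak-to-peak voltage. Parameter values below are illustrative only, not taken from the instrument's firmware.

```python
import numpy as np

# Sketch: the kinds of stimulation signals described above (sine, triangle,
# sawtooth; one frequency or two superimposed), scaled to +/- Vpp/2.
def waveform(kind, freq_hz, t, vpp=30.0):
    phase = (freq_hz * t) % 1.0                # normalised phase in [0, 1)
    if kind == "sine":
        w = np.sin(2 * np.pi * freq_hz * t)
    elif kind == "sawtooth":
        w = 2.0 * phase - 1.0                  # ramp from -1 to +1
    elif kind == "triangle":
        w = 4.0 * np.abs(phase - 0.5) - 1.0    # down-up ramp between -1 and +1
    else:
        raise ValueError(kind)
    return (vpp / 2.0) * w                     # scale to the output voltage

t = np.linspace(0.0, 1.0, 4000, endpoint=False)   # 1 s at 4 kHz
single = waveform("sine", 10.0, t)
# Two superimposed frequencies, averaged to stay within the voltage range.
superimposed = 0.5 * (waveform("sine", 10.0, t) + waveform("sine", 250.0, t))
print(round(single.max(), 1))  # 15.0 for a 30 Vpp sine
```

A hardware implementation would feed such samples to a DAC and amplifier; the sketch only shows the signal arithmetic.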

  10. An electric stimulation system for electrokinetic particle manipulation in microfluidic devices

    NASA Astrophysics Data System (ADS)

    Lopez-de la Fuente, M. S.; Moncada-Hernandez, H.; Perez-Gonzalez, V. H.; Lapizco-Encinas, B. H.; Martinez-Chapa, S. O.

    2013-03-01

    Microfluidic devices have grown significantly in the number of applications. Microfabrication techniques have evolved considerably; however, electric stimulation systems for microdevices have not advanced at the same pace. Electric stimulation of micro-fluidic devices is an important element in particle manipulation research. A flexible stimulation instrument is desired to perform configurable, repeatable, automated, and reliable experiments by allowing users to select the stimulation parameters. The instrument presented here is a configurable and programmable stimulation system for electrokinetic-driven microfluidic devices; it consists of a processor, a memory system, and a user interface to deliver several types of waveforms and stimulation patterns. It has been designed to be a flexible, highly configurable, low power instrument capable of delivering sine, triangle, and sawtooth waveforms with one single frequency or two superimposed frequencies ranging from 0.01 Hz to 40 kHz, and an output voltage of up to 30 Vpp. A specific stimulation pattern can be delivered over a single time period or as a sequence of different signals for different time periods. This stimulation system can be applied as a research tool where manipulation of particles suspended in liquid media is involved, such as biology, medicine, environment, embryology, and genetics. This system has the potential to lead to new schemes for laboratory procedures by allowing application specific and user defined electric stimulation. The development of this device is a step towards portable and programmable instrumentation for electric stimulation on electrokinetic-based microfluidic devices, which are meant to be integrated with lab-on-a-chip devices.

  11. Intelligent user interface concept for space station

    NASA Technical Reports Server (NTRS)

    Comer, Edward; Donaldson, Cameron; Bailey, Elizabeth; Gilroy, Kathleen

    1986-01-01

    The space station computing system must interface with a wide variety of users, from highly skilled operations personnel to payload specialists from all over the world. The interface must accommodate a wide variety of operations from the space platform, ground control centers and from remote sites. As a result, there is a need for a robust, highly configurable and portable user interface that can accommodate the various space station missions. The concept of an intelligent user interface executive, written in Ada, that would support a number of advanced human interaction techniques, such as windowing, icons, color graphics, animation, and natural language processing is presented. The user interface would provide intelligent interaction by understanding the various user roles, the operations and mission, the current state of the environment and the current working context of the users. In addition, the intelligent user interface executive must be supported by a set of tools that would allow the executive to be easily configured and to allow rapid prototyping of proposed user dialogs. This capability would allow human engineering specialists acting in the role of dialog authors to define and validate various user scenarios. The set of tools required to support development of this intelligent human interface capability is discussed and the prototyping and validation efforts required for development of the Space Station's user interface are outlined.

  12. Control of a powered prosthetic device via a pinch gesture interface

    NASA Astrophysics Data System (ADS)

    Yetkin, Oguz; Wallace, Kristi; Sanford, Joseph D.; Popa, Dan O.

    2015-06-01

    A novel system is presented to control a powered prosthetic device using a gesture tracking system worn on a user's sound hand in order to detect different grasp patterns. Experiments are presented with two different gesture tracking systems: one comprised of Conductive Thimbles worn on each finger (Conductive Thimble system), and another comprised of a glove which leaves the fingers free (Conductive Glove system). Timing tests were performed on the selection and execution of two grasp patterns using the Conductive Thimble system and the iPhone app provided by the manufacturer. A modified Box and Blocks test was performed using Conductive Glove system and the iPhone app provided by Touch Bionics. The best prosthetic device performance is reported with the developed Conductive Glove system in this test. Results show that these low encumbrance gesture-based wearable systems for selecting grasp patterns may provide a viable alternative to EMG and other prosthetic control modalities, especially for new prosthetic users who are not trained in using EMG signals.

  13. Lightning detection and tracking with consumer electronics

    NASA Astrophysics Data System (ADS)

    Kamau, Gilbert; van de Giesen, Nick

    2015-04-01

    Lightning data is not only important for environment and weather monitoring but also for safety purposes. The AS3935 Franklin Lightning Sensor (AMS, Unterpremstaetten, Austria) is a lightning sensor developed for inclusion in consumer electronics such as watches and mobile phones. The AS3935 is small (4 mm × 4 mm) and relatively inexpensive (around EUR 5). The downside is that only rough distance estimates are provided, as average power is assumed for every lightning strike. To be able to track lightning, a network of devices that monitor and keep track of occurrences of lightning strikes was developed. A communication interface was established between the sensors, a data logging circuit and a microcontroller. The digital outputs of the lightning sensor and data from a GPS are processed by the microcontroller and logged onto an SD card. The interface program enables sampling parameters such as distance from the lightning strike, time of strike occurrence and geographical location of the device. For archiving and analysis purposes, the data can be transferred from the SD card to a PC and results displayed using a graphical user interface program. Data gathered shows that the device can track the frequency and movement of lightning strikes in an area. Compared to other lightning sensor stations, the device offers larger memory, lower power consumption, smaller size, greater portability and lower cost. The devices were used in a network around Nairobi, Kenya. Through multi-lateration, lightning strikes could be located with a RMSE of 2 km or better.
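The multi-lateration step mentioned above can be sketched as a least-squares problem: subtracting one sensor's range equation from the others linearises the system in the unknown strike position. Sensor layout and distances below are synthetic, on a local plane in km.

```python
import numpy as np

# Sketch: locating a strike from several sensors' distance estimates.
# |x - p_i|^2 = d_i^2; subtracting the i = 0 equation gives the linear system
# 2 (p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2.
def locate(sensors, distances):
    p0, d0 = sensors[0], distances[0]
    A = 2.0 * (sensors[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - np.sum(p0**2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 12.0], [9.0, 11.0]])
strike = np.array([4.0, 7.0])
distances = np.linalg.norm(sensors - strike, axis=1)   # ideal, noise-free
print(np.round(locate(sensors, distances), 3))
```

With noisy distance estimates (as from the AS3935's coarse ranging), the least-squares solution degrades gracefully rather than failing outright, which matches the reported ~2 km RMSE.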

  14. Towards Intelligent Environments: An Augmented Reality–Brain–Machine Interface Operated with a See-Through Head-Mount Display

    PubMed Central

    Takano, Kouji; Hata, Naoki; Kansaku, Kenji

    2011-01-01

    The brain–machine interface (BMI) or brain–computer interface is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here, we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flicker visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that using an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR–BMI with both the HMD and LCD monitor. Our results indicate that AR–BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments. PMID:21541307

  15. Factors associated with interest in novel interfaces for upper limb prosthesis control

    PubMed Central

    Engdahl, Susannah M.; Chestek, Cynthia A.; Kelly, Brian; Davis, Alicia

    2017-01-01

    Background Surgically invasive interfaces for upper limb prosthesis control may allow users to operate advanced, multi-articulated devices. Given the potential medical risks of these invasive interfaces, it is important to understand what factors influence an individual’s decision to try one. Methods We conducted an anonymous online survey of individuals with upper limb loss. A total of 232 participants provided personal information (such as age, amputation level, etc.) and rated how likely they would be to try noninvasive (myoelectric) and invasive (targeted muscle reinnervation, peripheral nerve interfaces, cortical interfaces) interfaces for prosthesis control. Bivariate relationships between interest in each interface and 16 personal descriptors were examined. Significant variables from the bivariate analyses were then entered into multiple logistic regression models to predict interest in each interface. Results While many of the bivariate relationships were significant, only a few variables remained significant in the regression models. The regression models showed that participants were more likely to be interested in all interfaces if they had unilateral limb loss (p ≤ 0.001, odds ratio ≥ 2.799). Participants were more likely to be interested in the three invasive interfaces if they were younger (p < 0.001, odds ratio ≤ 0.959) and had acquired limb loss (p ≤ 0.012, odds ratio ≥ 3.287). Participants who used a myoelectric device were more likely to be interested in myoelectric control than those who did not (p = 0.003, odds ratio = 24.958). Conclusions Novel prosthesis control interfaces may be accepted most readily by individuals who are young, have unilateral limb loss, and/or have acquired limb loss. However, this analysis did not include all possible factors that may have influenced participants’ opinions on the interfaces, so additional exploration is warranted. PMID:28767716

  16. Factors associated with interest in novel interfaces for upper limb prosthesis control.

    PubMed

    Engdahl, Susannah M; Chestek, Cynthia A; Kelly, Brian; Davis, Alicia; Gates, Deanna H

    2017-01-01

    Surgically invasive interfaces for upper limb prosthesis control may allow users to operate advanced, multi-articulated devices. Given the potential medical risks of these invasive interfaces, it is important to understand what factors influence an individual's decision to try one. We conducted an anonymous online survey of individuals with upper limb loss. A total of 232 participants provided personal information (such as age, amputation level, etc.) and rated how likely they would be to try noninvasive (myoelectric) and invasive (targeted muscle reinnervation, peripheral nerve interfaces, cortical interfaces) interfaces for prosthesis control. Bivariate relationships between interest in each interface and 16 personal descriptors were examined. Significant variables from the bivariate analyses were then entered into multiple logistic regression models to predict interest in each interface. While many of the bivariate relationships were significant, only a few variables remained significant in the regression models. The regression models showed that participants were more likely to be interested in all interfaces if they had unilateral limb loss (p ≤ 0.001, odds ratio ≥ 2.799). Participants were more likely to be interested in the three invasive interfaces if they were younger (p < 0.001, odds ratio ≤ 0.959) and had acquired limb loss (p ≤ 0.012, odds ratio ≥ 3.287). Participants who used a myoelectric device were more likely to be interested in myoelectric control than those who did not (p = 0.003, odds ratio = 24.958). Novel prosthesis control interfaces may be accepted most readily by individuals who are young, have unilateral limb loss, and/or have acquired limb loss. However, this analysis did not include all possible factors that may have influenced participants' opinions on the interfaces, so additional exploration is warranted.
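The odds ratios quoted in these two survey records relate to logistic-regression coefficients by OR = exp(β). A small worked example, using the age odds ratio of 0.959 quoted in the abstract purely for illustration:

```python
import math

# Sketch: interpreting a logistic-regression odds ratio, OR = exp(beta).
def odds_ratio(beta):
    return math.exp(beta)

def probability(log_odds):
    """Logistic function: convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# An OR of 0.959 per year of age means each additional year multiplies the
# odds of interest in an invasive interface by 0.959 (about a 4% decrease).
beta_age = math.log(0.959)
print(round(odds_ratio(beta_age), 3))  # 0.959
# Ten extra years compound multiplicatively on the odds scale:
print(round(0.959 ** 10, 2))  # 0.66
```

This is why a seemingly small per-year odds ratio can still imply a substantial difference in interest across a wide age range.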

  17. Standards for the user interface - Developing a user consensus. [for Space Station Information System

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Perkins, Dorothy C.; Szczur, Martha R.

    1987-01-01

    The user support environment (USE) which is a set of software tools for a flexible standard interactive user interface to the Space Station systems, platforms, and payloads is described in detail. Included in the USE concept are a user interface language, a run time environment and user interface management system, support tools, and standards for human interaction methods. The goals and challenges of the USE are discussed as well as a methodology based on prototype demonstrations for involving users in the process of validating the USE concepts. By prototyping the key concepts and salient features of the proposed user interface standards, the user's ability to respond is greatly enhanced.

  18. Self-Fitting Hearing Aids

    PubMed Central

    Convery, Elizabeth

    2016-01-01

    A self-contained, self-fitting hearing aid (SFHA) is a device that enables the user to perform both threshold measurements leading to a prescribed hearing aid setting and fine-tuning, without the need for audiological support or access to other equipment. The SFHA has been proposed as a potential solution to address unmet hearing health care in developing countries and remote locations in the developed world and is considered a means to lower cost and increase uptake of hearing aids in developed countries. This article reviews the status of the SFHA and the evidence for its feasibility and challenges and predicts where it is heading. Devices that can be considered partly or fully self-fitting without audiological support were identified in the direct-to-consumer market. None of these devices are considered self-contained as they require access to other hardware such as a proprietary interface, computer, smartphone, or tablet for manipulation. While there is evidence that self-administered fitting processes can provide valid and reliable results, their success relies on user-friendly device designs and interfaces and easy-to-interpret instructions. Until these issues have been sufficiently addressed, optional assistance with the self-fitting process and on-going use of SFHAs is recommended. Affordability and a sustainable delivery system remain additional challenges for the SFHA in developing countries. Future predictions include a growth in self-fitting products, with most future SFHAs consisting of earpieces that connect wirelessly with a smartphone and providers offering assistance through a telehealth infrastructure, and the integration of SFHAs into the traditional hearing health-care model. PMID:27072929

  19. E-SMART system for in-situ detection of environmental contaminants. Quarterly technical progress report, July--September 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-10-01

    General Atomics (GA) leads a team of industrial, academic, and government organizations to develop the Environmental Systems Management, Analysis and Reporting neTwork (E-SMART) for the Defense Advanced Research Project Agency (DARPA), by way of this Technology Reinvestment Project (TRP). E-SMART defines a standard by which networks of smart sensing, sampling, and control devices can interoperate. E-SMART is intended to be an open standard, available to any equipment manufacturer. The user will be provided a standard platform on which a site-specific monitoring plan can be implemented using sensors and actuators from various manufacturers and upgraded as new monitoring devices become commercially available. This project will further develop and advance the E-SMART standardized network protocol to include new sensors, sampling systems, and graphical user interfaces.

  20. Seamless presentation capture, indexing, and management

    NASA Astrophysics Data System (ADS)

    Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel

    2005-10-01

    Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.
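The presentation-segmentation step ProjectorBox performs on the captured RGB stream can be sketched as simple frame differencing: a new slide is declared when consecutive frames differ by more than a threshold. The tiny synthetic frames and the threshold value below are illustrative assumptions, not the product's actual analysis.

```python
import numpy as np

# Sketch: detecting slide changes by thresholding the mean absolute
# difference between consecutive captured frames.
def slide_boundaries(frames, threshold=10.0):
    """Return the frame indices where a new slide is detected."""
    boundaries = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        if diff.mean() > threshold:
            boundaries.append(i)
    return boundaries

# Three "slides": constant grayscale images shown for a few frames each.
frames = [np.full((48, 64), v, dtype=np.uint8)
          for v in (0, 0, 0, 100, 100, 200, 200, 200)]
print(slide_boundaries(frames))  # [0, 3, 5]
```

Real captures also need debouncing against animations and cursor movement, but the thresholded-difference idea is the core of segmenting a stream into slide images for indexing.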

  1. Dynamics modeling for parallel haptic interfaces with force sensing and control.

    PubMed

    Bernstein, Nicholas; Lawrence, Dale; Pao, Lucy

    2013-01-01

    Closed-loop force control can be used on haptic interfaces (HIs) to mitigate the effects of mechanism dynamics. A single multidimensional force-torque sensor is often employed to measure the interaction force between the haptic device and the user's hand. The parallel haptic interface at the University of Colorado (CU) instead employs smaller 1D force sensors oriented along each of the five actuating rods to build up a 5D force vector. This paper shows that a particular manipulandum/hand partition in the system dynamics is induced by the placement and type of force sensing, and discusses the implications on force and impedance control for parallel haptic interfaces. The details of a "squaring down" process are also discussed, showing how to obtain reduced degree-of-freedom models from the general six degree-of-freedom dynamics formulation.
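Building up a Cartesian force vector from several 1-D rod-aligned sensors, as in the CU interface, amounts to summing each rod's axial contribution along its unit direction. The rod geometry and readings below are made-up values for illustration, not the CU device's actual kinematics.

```python
import numpy as np

# Sketch: composing a net force from scalar readings of 1-D sensors
# aligned with actuating rods, F = sum_i f_i * u_i.
def net_force(directions, readings):
    """directions: (n, 3) rod axes; readings: (n,) axial forces in newtons."""
    units = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    return units.T @ readings

directions = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0],
                       [1.0, 1.0, 0.0],
                       [0.0, 1.0, 1.0]])
readings = np.array([1.0, 2.0, -1.0, 0.5, 0.0])   # one scalar per rod
print(np.round(net_force(directions, readings), 3))
```

In the full device model, the rod directions change with the mechanism's configuration, so `directions` would be recomputed from the forward kinematics at each control step.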

  2. Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface

    PubMed Central

    Micconi, Giorgio; Caselli, Stefano; Benassi, Giacomo; Zambelli, Nicola; Bettelli, Manuele

    2017-01-01

    A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source. PMID:28961198
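The attractive force feedback described in this record can be sketched as a spring-like pull on the haptic handle toward the most intense detected source, saturated at the device's force limit. The gain and limit values are illustrative assumptions, not those of the authors' 3DOF device.

```python
import numpy as np

# Sketch: spring-like attractive feedback toward a detected source,
# with the magnitude capped at the haptic device's limit.
def attractive_force(handle_pos, source_pos, k=2.0, f_max=3.0):
    pull = k * (np.asarray(source_pos) - np.asarray(handle_pos))
    mag = np.linalg.norm(pull)
    if mag > f_max:                      # saturate at the device limit
        pull = pull * (f_max / mag)
    return pull

f = attractive_force([0.0, 0.0, 0.0], [0.1, 0.0, 0.0])
print(np.round(f, 2))  # [0.2 0.  0. ]
```

Rendered at the haptic update rate, such a force nudges the operator's hand, and hence the teleoperated UAV, toward the strongest radiation reading.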

  3. Assessment of Application Technology of Natural User Interfaces in the Creation of a Virtual Chemical Laboratory

    NASA Astrophysics Data System (ADS)

    Jagodziński, Piotr; Wolski, Robert

    2015-02-01

    Natural User Interfaces (NUI) are now widely used in electronic devices such as smartphones, tablets and gaming consoles. We have tried to apply this technology in the teaching of chemistry in middle school and high school. A virtual chemical laboratory was developed in which students can simulate the performance of laboratory activities similar to those that they perform in a real laboratory. A Kinect sensor, an example of an NUI device, was used to detect and analyze the students' hand movements. The studies conducted confirmed the effectiveness of the educational virtual laboratory and examined the extent to which this teaching aid increased the students' progress in learning chemistry. The results indicate that the use of NUI creates opportunities to both enhance and improve the quality of chemistry education. Working in a virtual laboratory using the Kinect interface results in greater emotional involvement and an increased sense of self-efficacy in laboratory work among students. As a consequence, students get higher marks and are more interested in the subject of chemistry.

  4. Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface.

    PubMed

    Aleotti, Jacopo; Micconi, Giorgio; Caselli, Stefano; Benassi, Giacomo; Zambelli, Nicola; Bettelli, Manuele; Zappettini, Andrea

    2017-09-29

    A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source.

  5. Layered approach to workstation design for medical image viewing

    NASA Astrophysics Data System (ADS)

    Haynor, David R.; Zick, Gregory L.; Heritage, Marcus B.; Kim, Yongmin

    1992-07-01

    Software engineering principles suggest that complex software systems are best constructed from independent, self-contained modules, thereby maximizing the portability, maintainability and modifiability of the produced code. This principle is important in the design of medical imaging workstations, where further developments in technology (CPU, memory, interface devices, displays, network connections) are required for clinically acceptable workstations, and it is desirable to provide different hardware platforms with the "same look and feel" for the user. In addition, the set of desired functions is relatively well understood, but the optimal user interface for delivering these functions on a clinically acceptable workstation still differs depending on department, specialty, or individual preference. At the University of Washington, we are developing a viewing station based on the IBM RISC/6000 computer and on new technologies that are just becoming commercially available. These include advanced voice recognition systems and an ultra-high-speed network. We are developing a set of specifications and a conceptual design for the workstation, and will be producing a prototype. This paper presents our current concepts concerning the architecture and software system design of the future prototype. Our conceptual design specifies requirements for a Database Application Programming Interface (DBAPI) and for a User API (UAPI). The DBAPI consists of a set of subroutine calls that define the admissible transactions between the workstation and an image archive. The UAPI describes the requests a user interface program can make of the workstation. It incorporates basic display and image processing functions, yet is specifically designed to allow extensions to the basic set at the application level. We will discuss the fundamental elements of the two APIs and illustrate their application to workstation design.

  6. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.

    PubMed

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2017-04-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smart-home environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control, which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it, then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m by 10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power-efficient enough to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary.
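    One simplified reading of ranging-based selection: once a pointing gesture is detected, choose the device whose UWB range dropped the most as the wrist extended toward it. This is a toy stand-in for SeleCon's actual classifier, with invented names:

```python
def select_device(ranges_before, ranges_after):
    """Pick the device whose measured UWB range (meters) shrank the
    most between the start and end of a pointing motion. A simplified
    stand-in for SeleCon's ranging-based selection, not its real
    algorithm."""
    best, best_drop = None, float("-inf")
    for dev in ranges_before:
        drop = ranges_before[dev] - ranges_after[dev]
        if drop > best_drop:
            best, best_drop = dev, drop
    return best
```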

  7. Development of a platform-independent receiver control system for SISIFOS

    NASA Astrophysics Data System (ADS)

    Lemke, Roland; Olberg, Michael

    1998-05-01

    Up to now, receiver control software has been a time-consuming development, usually written by receiver engineers who had mainly the hardware in mind. We present a low-cost and very flexible system which uses a minimal interface to the real hardware, and which makes it easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a Structured Query Language (SQL) database for information storage and retrieval, Ethernet socket-to-socket communication, and SCPI as a command control language. The complete system is in principle platform-independent, but for cost-saving reasons we are actually running it on a PC486 under Linux 2.0.30, which is a copylefted Unix. The only hardware-dependent parts are the digital input/output boards and the analog-to-digital and digital-to-analog converters. In the case of the Linux PC, we use a device driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like ordinary devices. The advantage of this system is firstly the low price and secondly the clear separation between the different software components, which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software on a single machine, the SQL database or the graphical user interface could be installed on separate computers.
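    The SCPI-over-Ethernet control path can be sketched as a plain TCP exchange (host, port, and newline framing are placeholder assumptions; the abstract does not specify SISIFOS's wire details):

```python
import socket

def scpi_query(host, port, command, timeout=2.0):
    """Send one SCPI command over a TCP socket and return the reply
    line. Host, port, and newline framing are placeholder assumptions,
    not details taken from the SISIFOS system."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(command.encode("ascii") + b"\n")
        return s.recv(4096).decode("ascii").strip()
```

    A typical use would be `scpi_query(host, port, "*IDN?")` against whatever instrument server is listening.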

  8. MEMS analog light processing: an enabling technology for adaptive optical phase control

    NASA Astrophysics Data System (ADS)

    Gehner, Andreas; Wildenhain, Michael; Neumann, Hannes; Knobbe, Jens; Komenda, Ondrej

    2006-01-01

    Various applications in modern optics demand Spatial Light Modulators (SLMs) with a true analog light processing capability, e.g. the generation of arbitrary analog phase patterns for adaptive optical phase control. For that purpose the Fraunhofer IPMS has developed a high-resolution MEMS Micro Mirror Array (MMA) with an integrated active-matrix CMOS address circuitry. The device provides 240 x 200 piston-type mirror elements with 40 μm pixel size, each of which can be addressed and deflected independently at 8-bit height resolution with a vertical analog deflection range of up to 400 nm, suitable for a 2π phase modulation in the visible. Full user programmability and control is provided by a newly developed driver software for Windows XP based PCs supporting both a Graphical User Interface (GUI) for stand-alone operation with pre-defined data patterns and an open ActiveX programming interface for direct data feed-through within a closed-loop environment. High-speed data communication is established by an IEEE 1394a FireWire interface together with an electronic driving board performing the actual MMA programming and control at a maximum frame rate of up to 500 Hz. Successful application demonstrations have been given in eye aberration correction, coupling efficiency optimization into a monomode fiber, ultra-short laser pulse modulation, and diffractive beam shaping. Besides presenting the basic device concept, the paper gives an overview of the results obtained from these applications.
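    The 8-bit piston addressing maps naturally onto a phase-to-code conversion. A sketch using the device figures quoted above (400 nm range, 8-bit resolution; reflection doubles the optical path, so a piston of half a wavelength gives a 2π shift), assuming a linear code mapping:

```python
import math

def piston_code(phase_rad, wavelength_nm, full_range_nm=400.0, bits=8):
    """Convert a desired reflective phase shift into an n-bit piston
    code. Reflection doubles the optical path, so a piston of half a
    wavelength yields a 2π phase shift. The 400 nm range and 8-bit
    resolution come from the abstract; the linear code mapping is
    an assumption."""
    piston_nm = (phase_rad % (2 * math.pi)) / (2 * math.pi) * (wavelength_nm / 2)
    code = round(piston_nm / full_range_nm * (2 ** bits - 1))
    return min(code, 2 ** bits - 1)
```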

  9. Two complementary personal medication management applications developed on a common platform: case report.

    PubMed

    Ross, Stephen E; Johnson, Kevin B; Siek, Katie A; Gordon, Jeffry S; Khan, Danish U; Haverhals, Leah M

    2011-07-12

    Adverse drug events are a major safety issue in ambulatory care. Improving medication self-management could reduce these adverse events. Researchers have developed medication applications for tethered personal health records (PHRs), but little has been reported about medication applications for interoperable PHRs. Our objective was to develop two complementary personal health applications on a common PHR platform: one to assist children with complex health needs (MyMediHealth), and one to assist older adults in care transitions (Colorado Care Tablet). The applications were developed using a user-centered design approach. The two applications shared a common PHR platform based on a service-oriented architecture. MyMediHealth employed Web and mobile phone user interfaces. Colorado Care Tablet employed a Web interface customized for a tablet PC. We created complementary medication management applications tailored to the needs of distinctly different user groups using common components. Challenges were addressed in multiple areas, including how to encode medication identities, how to incorporate knowledge bases for medication images and consumer health information, how to include supplementary dosing information, how to simplify user interfaces for older adults, and how to support mobile devices for children. These prototypes demonstrate the utility of abstracting PHR data and services (the PHR platform) from applications that can be tailored to meet the needs of diverse patients. Based on the challenges we faced, we provide recommendations on the structure of publicly available knowledge resources and the use of mobile messaging systems for PHR applications.

  10. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts: a suite of applications programs and an executive which serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment.
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image I/O, label I/O, parameter I/O, etc.) to facilitate image processing and provide the fastest I/O possible while maintaining a wide variety of capabilities. The run-time library also includes the Virtual Raster Display Interface (VRDI) which allows display oriented applications programs to be written for a variety of display devices using a set of common routines. (A display device can be any frame-buffer type device which is attached to the host computer and has memory planes for the display and manipulation of images. A display device may have any number of separate 8-bit image memory planes (IMPs), a graphics overlay plane, pseudo-color capabilities, hardware zoom and pan, and other features). The VRDI supports the following display devices: VICOM (Gould/Deanza) IP8500, RAMTEK RM-9465, ADAGE (Ikonas) IK3000 and the International Imaging Systems IVAS. VRDI's purpose is to provide a uniform operating environment not only for an application programmer, but for the user as well. The programmer is able to write programs without being concerned with the specifics of the device for which the application is intended. The VICAR Interactive Display Subsystem (VIDS) is a collection of utilities for easy interactive display and manipulation of images on a display device. VIDS has characteristics of both the executive and an application program, and offers a wide menu of image manipulation options. VIDS uses the VRDI to communicate with display devices. The first step in using VIDS to analyze and enhance an image (one simple example of VICAR's numerous capabilities) is to examine the histogram of the image. The histogram is a plot of frequency of occurrence for each pixel value (0 - 255) loaded in the image plane. 
If, for example, the histogram shows that there are no pixel values below 64 or above 192, the histogram can be "stretched" so that the value of 64 is mapped to zero and 192 is mapped to 255. Now the user can use the full dynamic range of the display device to display the data and better see its contents. Another example of a VIDS procedure is the JMOVIE command, which allows the user to run animations interactively on the display device. JMOVIE uses the concept of "frames", which are the individual frames which comprise the animation to be viewed. The user loads images into the frames after the size and number of frames has been selected. VICAR's source languages are primarily FORTRAN and C, with some VAX Assembler and array processor code. The VICAR run-time library is designed to work equally easily from either FORTRAN or C. The program was implemented on a DEC VAX series computer operating under VMS 4.7. The virtual memory required is 1.5MB. Approximately 180,000 blocks of storage are needed for the saveset. VICAR (version 2.3A/3G/13H) is a copyrighted work with all copyright vested in NASA and is available by license for a period of ten (10) years to approved licensees. This program was developed in 1989.
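    The histogram stretch described above (pixel value 64 mapped to 0, 192 mapped to 255) is a plain linear contrast stretch, which can be sketched as:

```python
import numpy as np

def linear_stretch(img, lo=64, hi=192):
    """Linear contrast stretch of the kind VIDS applies: map pixel
    value lo to 0 and hi to 255, clipping values outside [lo, hi].
    Assumes 8-bit image data."""
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

    After the stretch, the data occupy the full dynamic range of the display device, as the example in the text describes.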

  11. Subtitle Synchronization across Multiple Screens and Devices

    PubMed Central

    Rodriguez-Alsina, Aitor; Talavera, Guillermo; Orero, Pilar; Carrabina, Jordi

    2012-01-01

    Ambient Intelligence is a new paradigm in which environments are sensitive and responsive to the presence of people. This is having an increasing importance in multimedia applications, which frequently rely on sensors to provide useful information to the user. In this context, multimedia applications must adapt and personalize both content and interfaces in order to reach acceptable levels of context-specific quality of service for the user, and enable the content to be available anywhere and at any time. The next step is to make content available to everybody in order to overcome the existing access barriers to content for users with specific needs, or else to adapt to different platforms, hence making content fully usable and accessible. Appropriate access to video content, for instance, is not always possible due to the technical limitations of traditional video packaging, transmission and presentation. This restricts the flexibility of subtitles and audio-descriptions to be adapted to different devices, contexts and users. New Web standards built around HTML5 enable more featured applications with better adaptation and personalization facilities, and thus would seem more suitable for accessible AmI environments. This work presents a video subtitling system that enables the customization, adaptation and synchronization of subtitles across different devices and multiple screens. The benefits of HTML5 applications for building the solution are analyzed along with their current platform support. Moreover, examples of the use of the application in three different cases are presented. Finally, the user experience of the solution is evaluated. PMID:23012513

  12. Subtitle synchronization across multiple screens and devices.

    PubMed

    Rodriguez-Alsina, Aitor; Talavera, Guillermo; Orero, Pilar; Carrabina, Jordi

    2012-01-01

    Ambient Intelligence is a new paradigm in which environments are sensitive and responsive to the presence of people. This is having an increasing importance in multimedia applications, which frequently rely on sensors to provide useful information to the user. In this context, multimedia applications must adapt and personalize both content and interfaces in order to reach acceptable levels of context-specific quality of service for the user, and enable the content to be available anywhere and at any time. The next step is to make content available to everybody in order to overcome the existing access barriers to content for users with specific needs, or else to adapt to different platforms, hence making content fully usable and accessible. Appropriate access to video content, for instance, is not always possible due to the technical limitations of traditional video packaging, transmission and presentation. This restricts the flexibility of subtitles and audio-descriptions to be adapted to different devices, contexts and users. New Web standards built around HTML5 enable more featured applications with better adaptation and personalization facilities, and thus would seem more suitable for accessible AmI environments. This work presents a video subtitling system that enables the customization, adaptation and synchronization of subtitles across different devices and multiple screens. The benefits of HTML5 applications for building the solution are analyzed along with their current platform support. Moreover, examples of the use of the application in three different cases are presented. Finally, the user experience of the solution is evaluated.

  13. Measuring Presence in Virtual Environments

    DTIC Science & Technology

    1994-10-01

    viewpoint to change what they see, or to reposition their head to affect binaural hearing, or to search the environment haptically, they will experience a... increase presence in an alternate environment. For example, a head-mounted display that isolates the user from the real world may increase the sense... movement interface devices such as treadmills and trampolines, different gloves, and auditory equipment. Even as a low-end technological implementation of

  14. XML Translator for Interface Descriptions

    NASA Technical Reports Server (NTRS)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
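    A minimal sketch of the XML-to-C-header translation idea, using a hypothetical register schema (the real schema's tag names and structure are not given in this summary):

```python
import xml.etree.ElementTree as ET

def registers_to_c_header(xml_text):
    """Emit C #define constants for each register offset and field
    shift in a tiny hypothetical XML interface description. Tag and
    attribute names here are invented for illustration."""
    lines = ["#ifndef FPGA_IFACE_H", "#define FPGA_IFACE_H", ""]
    for reg in ET.fromstring(xml_text).iter("register"):
        name = reg.get("name").upper()
        lines.append(f"#define {name}_OFFSET {reg.get('offset')}")
        for fld in reg.iter("field"):
            lines.append(f"#define {name}_{fld.get('name').upper()}_SHIFT {fld.get('shift')}")
    lines += ["", "#endif"]
    return "\n".join(lines)
```

    Because both the C side and the HDL side would be generated from the same description, a change to the XML automatically propagates to every consumer, which is the consistency property the abstract emphasizes.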

  15. Graphical User Interface Programming in Introductory Computer Science.

    ERIC Educational Resources Information Center

    Skolnick, Michael M.; Spooner, David L.

    Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…

  16. Voice and gesture-based 3D multimedia presentation tool

    NASA Astrophysics Data System (ADS)

    Fukutake, Hiromichi; Akazawa, Yoshiaki; Okada, Yoshihiro

    2007-09-01

    This paper proposes a 3D multimedia presentation tool that allows the user to operate it intuitively through voice input and gesture input alone, without using a standard keyboard or a mouse device. The authors developed this system as a presentation tool to be used in a presentation room equipped with a large screen, such as an exhibition room in a museum, because in such a presentation environment it is better to use voice commands and gesture pointing input than a keyboard or a mouse device. This system was developed using IntelligentBox, which is a component-based 3D graphics software development system. IntelligentBox provides various types of 3D visible, reactive functional components called boxes, e.g., a voice input component and various multimedia handling components. IntelligentBox also provides a dynamic data linkage mechanism called slot-connection that allows the user to develop 3D graphics applications by combining already existing boxes through direct manipulations on a computer screen. Using IntelligentBox, the 3D multimedia presentation tool proposed in this paper was likewise developed by combining components through direct manipulations on a computer screen. The authors have previously proposed a 3D multimedia presentation tool using a stage metaphor and its voice input interface. Here, we extend the system to accept user gesture input in addition to voice commands. This paper explains the details of the proposed 3D multimedia presentation tool and especially describes its component-based voice and gesture input interfaces.

  17. Building intuitive 3D interfaces for virtual reality systems

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    This paper explores techniques for developing intuitive and efficient user interfaces for virtual reality systems. The work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.

  18. An augmented reality haptic training simulator for spinal needle procedures.

    PubMed

    Sutherland, Colin; Hashtrudi-Zaad, Keyvan; Sellens, Rick; Abolmaesumi, Purang; Mousavi, Parvin

    2013-11-01

    This paper presents the prototype for an augmented reality haptic simulation system with potential for spinal needle insertion training. The proposed system is composed of a torso mannequin, a MicronTracker2 optical tracking system, a PHANToM haptic device, and a graphical user interface to provide visual feedback. The system allows users to perform simulated needle insertions on a physical mannequin overlaid with an augmented reality cutaway of patient anatomy. A tissue model based on a finite-element model provides force during the insertion. The system allows for training without the need for the presence of a trained clinician or access to live patients or cadavers. A pilot user study demonstrates the potential and functionality of the system.
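    The force rendered during insertion could, in a much-simplified 1-D form, follow a pre-puncture spring plus post-puncture friction-and-cutting model (a common simplification in needle-insertion haptics; the system described above uses a finite-element tissue model, and the coefficients below are invented):

```python
def needle_force(depth_mm, punctured, k=0.2, friction=0.05, cutting=0.8):
    """Toy 1-D needle-insertion force in newtons: elastic resistance
    before membrane puncture, then friction plus a constant cutting
    force. Coefficients are invented for illustration; the actual
    simulator uses a finite-element tissue model."""
    if not punctured:
        return k * depth_mm  # pre-puncture: tissue deflects like a spring
    return cutting + friction * depth_mm  # post-puncture: cutting + friction
```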

  19. A web based Radiation Oncology Dose Manager with a rich User Interface developed using AJAX, ruby, dynamic XHTML and the new Yahoo/EXT User Interface Library.

    PubMed

    Vali, Faisal; Hong, Robert

    2007-10-11

    With the evolution of AJAX, ruby on rails, advanced dynamic XHTML technologies and the advent of powerful user interface libraries for javascript (EXT, Yahoo User Interface Library), developers now have the ability to provide truly rich interfaces within web browsers, with reasonable effort and without third-party plugins. We designed and developed an example of such a solution. The User Interface allows radiation oncology practices to intuitively manage different dose fractionation schemes by helping estimate total dose to irradiated organs.
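    Dose bookkeeping of the kind such an interface supports reduces to simple fractionation arithmetic. A sketch using the standard linear-quadratic biologically effective dose (BED) formula, BED = n·d·(1 + d/(α/β)); the abstract does not state which model the tool actually implements:

```python
def total_dose(dose_per_fraction_gy, fractions):
    """Physical total dose (Gy) for a fractionation scheme."""
    return dose_per_fraction_gy * fractions

def bed(dose_per_fraction_gy, fractions, alpha_beta_gy):
    """Biologically effective dose via the standard linear-quadratic
    formula, BED = n * d * (1 + d / (alpha/beta)). Shown only as the
    conventional estimate; the tool's internal model is not published
    in this summary."""
    d, n = dose_per_fraction_gy, fractions
    return n * d * (1 + d / alpha_beta_gy)
```

    For example, 30 fractions of 2 Gy give a physical dose of 60 Gy and, with α/β = 10 Gy, a BED of 72 Gy.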

  20. Resistive Exercise Device

    NASA Technical Reports Server (NTRS)

    Smith, Damon C. (Inventor)

    2005-01-01

    An exercise device 10 is particularly well suited for use in low gravity environments, and includes a frame 12 with a plurality of resistance elements 30, 82 supported in parallel on the frame. A load transfer member 20 is moveable relative to the frame for transferring the applied force to the free end of each captured resistance element. A load selection template 14 is removably secured to the load transfer member, and a plurality of capture mechanisms engage the free ends of corresponding resistance elements. The force applying mechanism 53 may be a handle, harness or other user interface for applying a force to move the load transfer member.

  1. Rapid Diagnosis of an Ulnar Fracture with Portable Hand-Held Ultrasound

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, Andrew W.; Brown, Ross; Diebel, Lawrence N.; Nicolaou, Savvas; Marshburn, Tom; Dulchavsky, Scott A.

    2002-01-01

    Orthopedic fractures are a common injury in operational activities, often occurring in isolated or hostile environments. Clinical ultrasound devices have become more user-friendly and lighter, allowing them to be easily transported with forward medical teams. The bone-soft tissue interface has a very large acoustic impedance mismatch, with a high reflectance that can be used to visualize breaks in contour, including fractures. Reported here is a case of an ulnar fracture that was quickly visualized with a hand-held ultrasound device in the early phase of a multi-system trauma resuscitation. The implications for operational medicine are discussed.

  2. Personal mobility and manipulation using robotics, artificial intelligence and advanced control.

    PubMed

    Cooper, Rory A; Ding, Dan; Grindle, Garrett G; Wang, Hongwu

    2007-01-01

    Recent advancements of technologies, including computation, robotics, machine learning, communication, and miniaturization technologies, bring us closer to futuristic visions of compassionate intelligent devices. The missing element is a basic understanding of how to relate human functions (physiological, physical, and cognitive) to the design of intelligent devices and systems that aid and interact with people. Our stakeholder and clinician consultants identified a number of mobility barriers that have been intransigent to traditional approaches. The most important physical obstacles are stairs, steps, curbs, doorways (doors), rough/uneven surfaces, weather hazards (snow, ice), crowded/cluttered spaces, and confined spaces. Focus group participants suggested a number of ways to make interaction simpler, including natural language interfaces such as the ability to say "I want a drink", a library of high level commands (open a door, park the wheelchair, ...), and a touchscreen interface with images so the user could point and use other gestures.

  3. Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System

    PubMed Central

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2008-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
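    The simulator-independent idea can be illustrated with a toy backend abstraction: one experiment description runs unchanged against either a software simulator or the hardware system. The class and method names below are invented for illustration, not the actual interface:

```python
class Backend:
    """Minimal stand-in for a simulation backend. The real interface
    is a full simulator-independent modeling language; names here are
    illustrative only."""
    def __init__(self, name):
        self.name = name
        self.populations = []
    def create_population(self, size, cell_type):
        """Record a neuron population and return its handle."""
        self.populations.append((size, cell_type))
        return len(self.populations) - 1

def run_experiment(sim):
    """One experiment description, runnable unchanged on any backend:
    an 80/20 excitatory/inhibitory split of conductance-based cells."""
    exc = sim.create_population(80, "IF_cond_exp")
    inh = sim.create_population(20, "IF_cond_exp")
    return exc, inh
```

    Passing a software backend or a hardware backend to `run_experiment` yields the same network description, which is exactly the portability property the abstract claims.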

  4. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.

    PubMed

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2009-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.

  5. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Domik, Gitta; Alam, Salim; Pinkney, Paul

    1992-01-01

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.

  6. Methods for Improving the User-Computer Interface. Technical Report.

    ERIC Educational Resources Information Center

    McCann, Patrick H.

    This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…

  7. User interface support

    NASA Technical Reports Server (NTRS)

    Lewis, Clayton; Wilde, Nick

    1989-01-01

    Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.

  8. Interface Metaphors for Interactive Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasper, Robert J.; Blaha, Leslie M.

    To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user’s knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display as well as the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms. Or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.

  9. MOtoNMS: A MATLAB toolbox to process motion data for neuromusculoskeletal modeling and simulation.

    PubMed

    Mantoan, Alice; Pizzolato, Claudio; Sartori, Massimo; Sawacha, Zimi; Cobelli, Claudio; Reggiani, Monica

    2015-01-01

    Neuromusculoskeletal modeling and simulation enable investigation of the neuromusculoskeletal system and its role in human movement dynamics. These methods are progressively introduced into daily clinical practice. However, a major factor limiting this translation is the lack of robust tools for the pre-processing of experimental movement data for their use in neuromusculoskeletal modeling software. This paper presents MOtoNMS (matlab MOtion data elaboration TOolbox for NeuroMusculoSkeletal applications), a toolbox freely available to the community, that aims to fill this gap. MOtoNMS processes experimental data from different motion analysis devices and generates input data for neuromusculoskeletal modeling and simulation software, such as OpenSim and CEINMS (Calibrated EMG-Informed NMS Modelling Toolbox). MOtoNMS implements commonly required processing steps and its generic architecture simplifies the integration of new user-defined processing components. MOtoNMS allows users to setup their laboratory configurations and processing procedures through user-friendly graphical interfaces, without requiring advanced computer skills. Finally, configuration choices can be stored enabling the full reproduction of the processing steps. MOtoNMS is released under GNU General Public License and it is available at the SimTK website and from the GitHub repository. Motion data collected at four institutions demonstrate that, despite differences in laboratory instrumentation and procedures, MOtoNMS succeeds in processing data and producing consistent inputs for OpenSim and CEINMS. MOtoNMS fills the gap between motion analysis and neuromusculoskeletal modeling and simulation. Its support for several devices, a complete implementation of the pre-processing procedures, its simple extensibility, the available user interfaces, and its free availability can boost the translation of neuromusculoskeletal methods in daily and clinical practice.

  10. Representing Graphical User Interfaces with Sound: A Review of Approaches

    ERIC Educational Resources Information Center

    Ratanasit, Dan; Moore, Melody M.

    2005-01-01

    The inability of computer users who are visually impaired to access graphical user interfaces (GUIs) has led researchers to propose approaches for adapting GUIs to auditory interfaces, with the goal of providing access for visually impaired people. This article outlines the issues involved in nonvisual access to graphical user interfaces, reviews…

  11. Stand-alone digital data storage control system including user control interface

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D. (Inventor); Gray, David L. (Inventor)

    1994-01-01

    A storage control system includes an apparatus and method for user control of a storage interface to operate a storage medium to store data obtained by a real-time data acquisition system. Digital data received in serial format from the data acquisition system is first converted to a parallel format and then provided to the storage interface. The operation of the storage interface is controlled in accordance with instructions based on user control input from a user. Also, a user status output is displayed in accordance with storage data obtained from the storage interface. By allowing the user to control and monitor the operation of the storage interface, a stand-alone, user-controllable data storage system is provided for storing the digital data obtained by a real-time data acquisition system.

  12. Quantifying medical student clinical experiences via an ICD Code Logging App.

    PubMed

    Rawlins, Fred; Sumpter, Cameron; Sutphin, Dean; Garner, Harold R

    2018-03-01

    The logging of ICD Diagnostic, Procedure and Drug codes is one means of tracking the experience of medical students' clinical rotations. The goal is to create a web-based computer and mobile application to track the progress of trainees, monitor the effectiveness of their training locations and be a means of sampling public health status. We have developed a web-based app in which medical trainees make entries via a simple and quick interface optimized for both mobile devices and personal computers. For each patient interaction, users enter ICD diagnostic, procedure, and drug codes via a hierarchical or search entry interface, as well as patient demographics (age range and gender, but no personal identifiers), and free-text notes. Users and administrators can review and edit input via a series of output interfaces. The user interface and back-end database are provided via dual redundant failover Linux servers. Students master the interface in ten minutes, and thereafter complete entries in less than one minute. Five hundred forty third-year VCOM students each averaged 100 entries in their first four-week clinical rotation. Data accumulated in various Appalachian clinics and Central American medical mission trips have demonstrated the public health surveillance utility of the application. PC and mobile apps can be used to collect medical trainee experience in real time or near real-time, quickly, and efficiently. This system has collected 75,596 entries to date, less than 2% of trainees have needed assistance to become proficient, and medical school administrators are using the various summaries to evaluate students and compare different rotation sites. Copyright © 2017. Published by Elsevier B.V.
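
    The per-encounter record and the search-entry interface described above can be sketched as a small data model plus a substring search over code descriptions. The field names, the three-code catalog, and its abbreviated descriptions are hypothetical stand-ins, not the app's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Encounter:
    """One patient interaction: demographics (no identifiers), codes, notes."""
    age_range: str
    gender: str
    icd_codes: list = field(default_factory=list)
    notes: str = ""

# Hypothetical mini-catalog of ICD codes and (abbreviated) descriptions.
CATALOG = {
    "J45": "Asthma",
    "E11": "Type 2 diabetes mellitus",
    "I10": "Essential (primary) hypertension",
}

def search_codes(catalog, query):
    """Search-entry interface: match the query against code descriptions."""
    q = query.lower()
    return sorted(code for code, desc in catalog.items() if q in desc.lower())

entry = Encounter(age_range="30-39", gender="F")
entry.icd_codes += search_codes(CATALOG, "diabetes")
```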

  13. An assembly-type master-slave catheter and guidewire driving system for vascular intervention.

    PubMed

    Cha, Hyo-Jeong; Yi, Byung-Ju; Won, Jong Yun

    2017-01-01

    Current vascular intervention inevitably exposes both the operator and the patient to a large amount of X-ray radiation during the procedure. The purpose of this study is to propose a new catheter driving system which assists the operator by reducing X-ray exposure and providing a convenient user interface. For this, an assembly-type 4-degree-of-freedom master-slave system was designed and tested to verify the efficiency. First, current vascular intervention procedures are analyzed to develop a new robotic procedure that enables us to use conventional vascular intervention devices such as catheter and guidewire which are commercially available in the market. Some parts of the slave robot that contact the devices were designed to be easily assembled and disassembled from the main body of the slave robot for sterilization. A master robot is compactly designed to conduct insertion and rotational motion and is able to switch from the guidewire driving mode to the catheter driving mode or vice versa. A phantom resembling the human arteries was developed, and the master-slave robotic system is tested using the phantom. The contact force of the guidewire tip according to the shape of the arteries is measured and reflected to the user through the master robot during the phantom experiment. This system can drastically reduce radiation exposure by replacing human effort with a robotic system in high-radiation-exposure procedures. Also, the benefits of the proposed robot system are low cost, by employing currently available devices, and an easy human interface.

  14. HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.

    PubMed

    Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael

    2017-07-01

    Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the 3 factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters based on a 2 × 2 × 2 repeated measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent measures, whereas the effect of the screen output type was only significant on the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
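
    The eight interaction modes are simply the Cartesian product of the three two-level factors named in the abstract; a quick enumeration shows the full 2 × 2 × 2 design:

```python
from itertools import product

# The three factors and their levels, as listed in the abstract.
output_devices = ["PC screen", "head-mounted display"]
view_support = ["single view", "multiple views"]
input_devices = ["PC keyboard", "PS3 gamepad"]

# Each participant used every combination (repeated measures design).
interaction_modes = list(product(output_devices, view_support, input_devices))
```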

  15. Vacuum status-display and sector-conditioning programs

    NASA Astrophysics Data System (ADS)

    Skelly, J.; Yen, S.

    1990-08-01

    Two programs have been developed for observation and control of the AGS vacuum system, which include the following notable features: (1) they incorporate a graphical user interface and (2) they are driven by a relational database which describes the vacuum system. The vacuum system comprises some 440 devices organized into 28 vacuum sectors. The status-display program invites menu selection of a sector, interrogates the relational database for relevant vacuum devices, acquires live readbacks and posts a graphical display of their status. The sector-conditioning program likewise invites sector selection, produces the same status display and also implements process control logic on the sector devices to pump the sector down from atmospheric pressure to high vacuum over a period extending several hours. As additional devices are installed in the vacuum system, the devices are added to the relational database; these programs then automatically include the new devices.
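
    The database-driven pattern described above (select a sector, interrogate a relational database for its devices, then post their status) can be sketched with an in-memory SQLite table. The schema, device names, and device kinds below are hypothetical; the actual AGS database is not described here.

```python
import sqlite3

# Hypothetical relational description of a vacuum system: one row per device.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (name TEXT, sector INTEGER, kind TEXT)")
conn.executemany(
    "INSERT INTO devices VALUES (?, ?, ?)",
    [("IP-A01", 1, "ion pump"),
     ("CG-A02", 1, "cold-cathode gauge"),
     ("VV-B01", 2, "valve")],
)

def devices_in_sector(conn, sector):
    """What a status-display program would do on menu selection of a sector:
    look up the relevant devices before acquiring live readbacks."""
    rows = conn.execute(
        "SELECT name FROM devices WHERE sector = ? ORDER BY name", (sector,))
    return [name for (name,) in rows]
```

    Adding a newly installed device is then a single INSERT; both programs pick it up automatically on the next query, which is the maintenance benefit the abstract highlights.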

  16. Comparing two anesthesia information management system user interfaces: a usability evaluation.

    PubMed

    Wanderer, Jonathan P; Rao, Anoop V; Rothwell, Sarah H; Ehrenfeld, Jesse M

    2012-11-01

    Anesthesia information management systems (AIMS) have been developed by multiple vendors and are deployed in thousands of operating rooms around the world, yet not much is known about measuring and improving AIMS usability. We developed a methodology for evaluating AIMS usability in a low-fidelity simulated clinical environment and used it to compare an existing user interface with a revised version. We hypothesized that the revised user interface would be more useable. In a low-fidelity simulated clinical environment, twenty anesthesia providers documented essential anesthetic information for the start of the case using both an existing and a revised user interface. Participants had not used the revised user interface previously and completed a brief training exercise prior to the study task. All participants completed a workload assessment and a satisfaction survey. All sessions were recorded. Multiple usability metrics were measured. The primary outcome was documentation accuracy. Secondary outcomes were perceived workload, number of documentation steps, number of user interactions, and documentation time. The interfaces were compared and design problems were identified by analyzing recorded sessions and survey results. Use of the revised user interface was shown to improve documentation accuracy from 85.1% to 92.4%, a difference of 7.3% (95% confidence interval [CI] for the difference 1.8 to 12.7). The revised user interface decreased the number of user interactions by 6.5 for intravenous documentation (95% CI 2.9 to 10.1) and by 16.1 for airway documentation (95% CI 11.1 to 21.1). The revised user interface required 3.8 fewer documentation steps (95% CI 2.3 to 5.4). Airway documentation time was reduced by 30.5 seconds with the revised workflow (95% CI 8.5 to 52.4). There were no significant time differences noted in intravenous documentation or in total task time. No difference in perceived workload was found between the user interfaces. 
Two user interface design problems were identified in the revised user interface. The usability of anesthesia information management systems can be evaluated using a low-fidelity simulated clinical environment. User testing of the revised user interface showed improvement in some usability metrics and highlighted areas for further revision. Vendors of AIMS and those who use them should consider adopting methods to evaluate and improve AIMS usability.
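
    The accuracy result above is reported as a difference in proportions with a 95% CI. A generic two-proportion Wald interval illustrates how such an interval is formed; the per-group sample sizes below are hypothetical, and the authors' exact statistical method is not stated in the abstract.

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Approximate 95% Wald confidence interval for the difference p2 - p1
    of two independent proportions (z = 1.96 for 95%)."""
    d = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d - z * se, d + z * se

# 85.1% vs 92.4% accuracy from the abstract; n = 500 items per arm is assumed.
low, high = diff_ci(0.851, 500, 0.924, 500)
```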

  17. Mobile application MDDCS for modeling the expansion dynamics of a dislocation loop in FCC metals

    NASA Astrophysics Data System (ADS)

    Kirilyuk, Vasiliy; Petelin, Alexander; Eliseev, Andrey

    2017-11-01

    A mobile version of the software package Dynamic Dislocation of Crystallographic Slip (MDDCS) designed for modeling the expansion dynamics of dislocation loops and formation of a crystallographic slip zone in FCC metals is examined. The paper describes the possibilities for using MDDCS, the application interface, and the database scheme. The software has a simple and intuitive interface and does not require special training. The user can set the initial parameters of the experiment, carry out computational experiments, export parameters and results of the experiment into separate text files, and display the experiment results on the device screen.

  18. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they get transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on the fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
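
    The pulse-height spectrum mentioned in the Results is, at bottom, a histogram of per-event detected pulse heights. A minimal binning sketch follows; the bin count, range, and sample pulse heights are hypothetical, not values from hybridMANTIS.

```python
def pulse_height_spectrum(pulse_heights, n_bins, max_height):
    """Histogram pulse heights into n_bins equal-width bins over
    [0, max_height]; overflows are clamped into the last bin."""
    bins = [0] * n_bins
    width = max_height / n_bins
    for h in pulse_heights:
        bins[min(int(h // width), n_bins - 1)] += 1
    return bins

# Example: four events binned into three 10-unit-wide channels.
spectrum = pulse_height_spectrum([5, 15, 25, 25], n_bins=3, max_height=30)
```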

  19. Expansion of Smartwatch Touch Interface from Touchscreen to Around Device Interface Using Infrared Line Image Sensors

    PubMed Central

    Lim, Soo-Chul; Shin, Jungsoon; Kim, Seung-Chan; Park, Joonah

    2015-01-01

    Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose the expansion of the touch region of the screen to the back of the user’s hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read wrist gestures. The gyroscope incorporates a dynamic time warping gesture recognition algorithm for eliminating unintended touch inputs during the free motion of the wrist while wearing the smartwatch. The prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed positional information of the finger when it was used to touch the back of the hand could be used to control the smartwatch graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces. PMID:26184202
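
    One plausible estimator consistent with the description (calibrated IR intensity, maximum-intensity region of a line-sensor array) is to threshold the array and take an intensity-weighted centroid of the surviving pixels. The threshold value and function names are assumptions, not the paper's algorithm.

```python
def touch_position(intensities, threshold=10.0):
    """Estimate the touched finger position along an IR line sensor as the
    intensity-weighted centroid of above-threshold pixels.
    Returns None when no pixel clears the threshold (no touch)."""
    peaks = [(i, v) for i, v in enumerate(intensities) if v >= threshold]
    if not peaks:
        return None
    total = sum(v for _, v in peaks)
    return sum(i * v for i, v in peaks) / total
```

    The threshold step is what lets ambient IR (low, diffuse intensity) be rejected before localization.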

  20. Near infrared spectroscopy based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Ranganatha, Sitaram; Hoshi, Yoko; Guan, Cuntai

    2005-04-01

    A brain-computer interface (BCI) provides users with an alternative output channel other than the normal output path of the brain. BCI is being given much attention recently as an alternate mode of communication and control for the disabled, such as patients suffering from Amyotrophic Lateral Sclerosis (ALS) or who are "locked-in". BCI may also find applications in military, education and entertainment. Most of the existing BCI systems which rely on the brain's electrical activity use scalp EEG signals. The scalp EEG is an inherently noisy and non-linear signal. The signal is detrimentally affected by various artifacts such as EOG, EMG, ECG, and so forth. EEG is cumbersome to use in practice, because of the need for applying conductive gel, and the need for the subject to be immobile. There is an urgent need for a more accessible interface that uses a more direct measure of cognitive function to control an output device. The optical response of Near Infrared Spectroscopy (NIRS) denoting brain activation can be used as an alternative to electrical signals, with the intention of developing a more practical and user-friendly BCI. In this paper, a new method of brain-computer interface (BCI) based on NIRS is proposed. Preliminary results of our experiments towards developing this system are reported.

  1. Upper Body-Based Power Wheelchair Control Interface for Individuals with Tetraplegia

    PubMed Central

    Thorp, Elias B.; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica; Pierella, Camilla; Roth, Elliot J.; Gonzalez, Ismael Seanez; Mussa-Ivaldi, Ferdinando A.

    2016-01-01

    Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users a limited vocabulary of commands and often do not take advantage of the user’s residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control the power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five driving sessions, were able to achieve smoothness that was similar to the smoothness with their current joystick. All participants were twice as slow using the BMI; however, they improved with practice. Importantly, users were able to generalize from training on a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control. PMID:26054071
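
    Generating proportional commands from high-dimensional shoulder kinematics reduces, at its core, to a linear map from sensor signals to two command axes. The sketch below uses a fixed, hypothetical weight matrix; in the actual BMI the map is learned from each user's calibration data (e.g., via dimensionality reduction), which is not reproduced here.

```python
def map_to_command(sensor_values, weights):
    """Linear body-machine map: each command axis is a weighted sum of the
    shoulder sensor signals (proportional control)."""
    return [sum(w * s for w, s in zip(row, sensor_values)) for row in weights]

# Hypothetical 2 x 4 calibration matrix: four shoulder sensors in,
# (forward/backward, left/right turn) commands out.
WEIGHTS = [
    [0.6, 0.4, 0.0, 0.0],   # forward/backward axis
    [0.0, 0.0, 0.5, -0.5],  # turning axis
]
cmd = map_to_command([0.5, 0.5, 0.2, 0.1], WEIGHTS)
```

    Because the map is linear and user-specific, small residual shoulder movements can be amplified into full-range wheelchair commands, which is what makes the interface usable after high cervical injury.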

  2. Starting Over: Current Issues in Online Catalog User Interface Design.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1992-01-01

    Discussion of online catalogs focuses on issues in interface design. Issues addressed include understanding the user base; common user access (CUA) with personal computers; common command language (CCL); hyperlinks; screen design issues; differences from card catalogs; indexes; graphic user interfaces (GUIs); color; online help; and remote users.…

  3. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    Developments in computer technology over the last decade have changed the ways computers are used. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially their user interfaces, a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language, defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting applications even to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as running example to illustrate the approach.

  4. ANALOG I/O MODULE TEST SYSTEM BASED ON EPICS CA PROTOCOL AND ACTIVEX CA INTERFACE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeng, Y.; Hoff, L.

    2003-10-13

    Analog input (ADC) and output (DAC) modules play a substantial role in device-level control of accelerator and large experimental physics control systems. In order to get the best performance, some features of analog modules including linearity, accuracy, crosstalk, thermal drift and so on have to be evaluated during the preliminary design phase. Gain and offset error calibration and thermal drift compensation (if needed) may have to be done in the implementation phase as well. A natural technique for performing these tasks is to interface the analog I/O modules and GPIB-programmable test instruments with a computer, which can complete measurements or calibration automatically. A difficulty is that drivers of analog modules and test instruments usually work on totally different platforms (VxWorks vs. Windows). Developing new test routines and drivers for testing instruments under the VxWorks (or any other RTOS) platform is not a good solution because such systems have relatively poor user interfaces and developing such software requires substantial effort. The EPICS CA protocol and ActiveX CA interface provide another choice: a PC- and LabVIEW-based test system. Analog I/O modules can be interfaced from LabVIEW test routines via the ActiveX CA interface. Test instruments can be controlled via LabVIEW drivers, most of which are provided by instrument vendors or by National Instruments. LabVIEW also provides extensive data analysis and processing functions. Using these functions, users can generate powerful test routines very easily. Several applications built for the Spallation Neutron Source (SNS) Beam Loss Monitor (BLM) system are described in this paper.
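
    The gain and offset error calibration mentioned above amounts to fitting a straight line through (set-point, reading) pairs: the slope is the gain error and the intercept is the offset error. A least-squares sketch follows; the voltage points in the example are hypothetical, and the paper's LabVIEW implementation is not reproduced here.

```python
def fit_gain_offset(set_points, readings):
    """Ordinary least-squares fit of reading = gain * set_point + offset,
    as used for ADC/DAC gain and offset error calibration."""
    n = len(set_points)
    mx = sum(set_points) / n
    my = sum(readings) / n
    sxx = sum((x - mx) ** 2 for x in set_points)
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_points, readings))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical sweep: a programmable source sets 0..4 V, the ADC reads back.
gain, offset = fit_gain_offset([0, 1, 2, 3, 4], [0.5, 2.5, 4.5, 6.5, 8.5])
```

    An ideal module would yield gain 1 and offset 0; the fitted values feed the correction applied in firmware or driver software.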

  5. A Monte Carlo software for the 1-dimensional simulation of IBIC experiments

    NASA Astrophysics Data System (ADS)

    Forneris, J.; Jakšić, M.; Pastuović, Ž.; Vittone, E.

    2014-08-01

    Ion beam induced charge (IBIC) microscopy is a valuable tool for the analysis of the electronic properties of semiconductors. In this work, a recently developed Monte Carlo approach for the simulation of IBIC experiments is presented along with a standalone software package equipped with a graphical user interface. The method is based on the probabilistic interpretation of the excess charge carrier continuity equations, and it offers the end user full control not only of the physical properties governing the induced-charge formation mechanism (i.e., mobility, lifetime, electrostatics, device geometry), but also of the relevant experimental conditions (ionization profiles, beam dispersion, electronic noise) affecting the measurement of the IBIC pulses. Moreover, the software implements a novel model for the quantitative evaluation of the radiation damage effects on the charge collection efficiency degradation of ion-beam-irradiated devices. The reliability of the model implementation is then validated against a benchmark IBIC experiment.
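
    The probabilistic treatment of carrier drift and trapping can be illustrated with a much simpler 1-D Monte Carlo than the paper's: carriers generated at one electrode of a planar device drift toward the other, each step risking trapping, while the induced charge follows the Shockley-Ramo theorem (uniform weighting field, so dQ/q = dx/d). This is a textbook sketch under those stated assumptions, not the software's model; the estimate should approach the Hecht value (λ/d)(1 − exp(−d/λ)).

```python
import random

def mc_cce(drift_length, thickness, n_carriers=20000, step=0.01, seed=1):
    """Monte Carlo estimate of charge collection efficiency for carriers
    generated at x = 0 drifting to x = thickness. Per step of size `step`,
    the trapping probability is step/drift_length (discretized exponential
    trapping); each completed step induces step/thickness of the charge."""
    rng = random.Random(seed)
    induced = 0.0
    for _ in range(n_carriers):
        x = 0.0
        while x < thickness:
            if rng.random() < step / drift_length:  # carrier trapped here
                break
            x += step
            induced += step / thickness  # Shockley-Ramo, uniform field
    return induced / n_carriers

# Drift length equal to the device thickness: Hecht predicts 1 - e^-1 = 0.632.
cce = mc_cce(drift_length=1.0, thickness=1.0)
```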

  6. Wireless augmented reality communication system

    NASA Technical Reports Server (NTRS)

    Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)

    2006-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  7. ContextProvider: Context awareness for medical monitoring applications.

    PubMed

    Mitchell, Michael; Meyers, Christopher; Wang, An-I Andy; Tyson, Gary

    2011-01-01

    Smartphones are sensor-rich and Internet-enabled. With their on-board sensors, web services, social media, and external biosensors, smartphones can provide contextual information about the device, user, and environment, thereby enabling the creation of rich, biologically driven applications. We introduce ContextProvider, a framework that offers a unified, query-able interface to contextual data on the device. Unlike other context-based frameworks, ContextProvider offers interactive user feedback, self-adaptive sensor polling, and minimal reliance on third-party infrastructure. ContextProvider also allows for rapid development of new context and bio-aware applications. Evaluation of ContextProvider shows the incorporation of an additional monitoring sensor into the framework with fewer than 100 lines of Java code. With adaptive sensor monitoring, power consumption per sensor can be reduced down to 1% overhead. Finally, through the use of context, accuracy of data interpretation can be improved by up to 80%.
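
    The self-adaptive sensor polling credited with the power savings above can be sketched as exponential back-off: while consecutive readings are stable, the polling interval doubles up to a cap; any significant change snaps it back to fast polling. The interval bounds and tolerance below are hypothetical, not ContextProvider's actual parameters.

```python
def next_interval(current_s, last_value, new_value,
                  min_s=1.0, max_s=60.0, rel_tol=0.05):
    """Adaptive polling: back off exponentially while a sensor reading is
    stable (within rel_tol relative change), poll fast as soon as it moves."""
    stable = abs(new_value - last_value) <= rel_tol * max(abs(last_value), 1e-9)
    if stable:
        return min(current_s * 2, max_s)  # stable: halve the polling rate
    return min_s                           # changing: sample at full rate
```

    Backing off on stable sensors is where the per-sensor power overhead shrinks: an unchanging reading ends up sampled at the capped interval rather than continuously.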

  8. Home security system using internet of things

    NASA Astrophysics Data System (ADS)

    Anitha, A.

    2017-11-01

    IoT refers to the infrastructure of connected physical devices, which is growing at a rapid rate as a huge number of devices and objects become connected to the Internet. Home security is a very useful application of IoT, and we use it to create an inexpensive security system for homes as well as industrial use. The system informs the owner of any unauthorized entry, or whenever the door is opened, by sending a notification to the user. After receiving the notification, the user can take the necessary actions. The security system uses an Arduino Uno microcontroller to interface between the components, a magnetic reed sensor to monitor the door status, a buzzer to sound the alarm, and an ESP8266 WiFi module to connect and communicate over the Internet. The main advantages of such a system include ease of setup, low cost and low maintenance.
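The notification logic of such a system is simple enough to sketch. The following is an illustrative simulation only, not the actual Arduino/ESP8266 firmware; the sensor samples and the notification call are stubs. It scans reed-sensor readings and fires a notification on every closed-to-open transition.

```python
def monitor_door(sensor_readings, notify):
    """Scan a sequence of reed-sensor samples (True = door closed)
    and call notify() on every closed->open transition."""
    alerts = 0
    previous = True  # assume the door starts closed
    for closed in sensor_readings:
        if previous and not closed:   # door just opened
            notify("Door opened - possible unauthorized entry")
            alerts += 1
        previous = closed
    return alerts
```

In the real device the reading would come from a GPIO pin and notify() would push a message over WiFi; the transition-detection logic is the same.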

  9. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)

    2014-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  10. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)

    2016-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  11. Diffraction phase microscopy realized with an automatic digital pinhole

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Zhang, Zhimin; Liu, Xu

    2017-12-01

    We report a novel approach to diffraction phase microscopy (DPM) with automatic pinhole alignment. The pinhole, which serves as a spatial low-pass filter to generate a uniform reference beam, is made out of a liquid crystal display (LCD) device that allows for electrical control. We have made DPM more accessible to users, while maintaining high phase-measurement sensitivity and accuracy, by exploring low-cost optical components and replacing the tedious manual pinhole alignment with an automatic optical alignment procedure. Because its size and shape can be modified flexibly, the LCD device serves as a universal filter that requires no future replacement. Moreover, a graphical user interface for real-time phase imaging has also been developed using a USB CMOS camera. Experimental results for height maps of a bead sample and the dynamics of live red blood cells (RBCs) are also presented, making this system ready for broad adoption in biological imaging and material metrology.
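The role of the pinhole as a spatial low-pass filter can be sketched numerically. The fragment below is an assumption-laden illustration, not the authors' software: it applies a circular "digital pinhole" mask in the Fourier plane of a complex field, which is effectively what the programmable LCD filter does to the reference beam.

```python
import numpy as np

def pinhole_filter(field, radius):
    """Low-pass a complex optical field by keeping only Fourier
    components within `radius` pixels of the DC term."""
    F = np.fft.fftshift(np.fft.fft2(field))       # spectrum, DC centered
    n, m = field.shape
    y, x = np.ogrid[-n // 2:n - n // 2, -m // 2:m - m // 2]
    mask = (x * x + y * y) <= radius * radius     # circular "pinhole"
    return np.fft.ifft2(np.fft.ifftshift(F * mask))
```

With radius 0 only the DC term survives, so any input collapses to a uniform field equal to its mean, mirroring how a small pinhole cleans the reference arm into a uniform beam.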

  12. Trends in communicative access solutions for children with cerebral palsy.

    PubMed

    Myrden, Andrew; Schudlo, Larissa; Weyand, Sabine; Zeyl, Timothy; Chau, Tom

    2014-08-01

    Access solutions may facilitate communication in children with limited functional speech and motor control. This study reviews current trends in access solution development for children with cerebral palsy, with particular emphasis on the access technology that harnesses a control signal from the user (eg, movement or physiological change) and the output device (eg, augmentative and alternative communication system) whose behavior is modulated by the user's control signal. Access technologies have advanced from simple mechanical switches to machine vision (eg, eye-gaze trackers), inertial sensing, and emerging physiological interfaces that require minimal physical effort. Similarly, output devices have evolved from bulky, dedicated hardware with limited configurability, to platform-agnostic, highly personalized mobile applications. Emerging case studies encourage the consideration of access technology for all nonverbal children with cerebral palsy with at least nascent contingency awareness. However, establishing robust evidence of the effectiveness of the aforementioned advances will require more expansive studies. © The Author(s) 2014.

  13. Integrated Model for E-Learning Acceptance

    NASA Astrophysics Data System (ADS)

    Ramadiani; Rodziah, A.; Hasan, S. M.; Rusli, A.; Noraini, C.

    2016-01-01

    E-learning will not work if the system is not used in accordance with user needs, and the user interface is very important in encouraging use of the application. Many theories discuss user-interface usability evaluation and technology acceptance separately; correlating interface usability evaluation with user acceptance could enhance the e-learning process. An evaluation model for e-learning interface acceptance is therefore important to investigate. The aim of this study is to propose an integrated e-learning user-interface acceptance evaluation model. The model combines several theories of e-learning interface measurement, such as user learning style, usability evaluation, and user benefit. We formulated these constructs in questionnaires that were administered to 125 English Language School (ELS) students. The statistical analysis used Structural Equation Modeling with LISREL v8.80 and MANOVA.

  14. A Framework for Analyzing and Testing the Performance of Software Services

    NASA Astrophysics Data System (ADS)

    Bertolino, Antonia; de Angelis, Guglielmo; di Marco, Antinisca; Inverardi, Paola; Sabetta, Antonino; Tivoli, Massimo

    Networks "Beyond the 3rd Generation" (B3G) are characterized by mobile and resource-limited devices that communicate through different kinds of network interfaces. Software services deployed in such networks shall adapt themselves according to possible execution contexts and requirement changes. At the same time, software services have to be competitive in terms of the Quality of Service (QoS) provided, or perceived by the end user.

  15. Monitor Network Traffic with Packet Capture (pcap) on an Android Device

    DTIC Science & Technology

    2015-09-01

    ... administrative privileges. Under the current design Android development requirement, an Android Graphical User Interface (GUI) application cannot directly ... build an Android application to monitor network traffic using open source packet capture (pcap) libraries. Subject terms: ELIDe, Android, pcap. ... Building Application with Native Codes ... Calling Native Codes Using JNI ... Calling Native Codes from an Android Application ... Retrieve Live

  16. Experimental Investigation and Numerical Predication of a Cross-Flow Fan

    DTIC Science & Technology

    2006-12-01

    Figure 3. Combination probes and pressure tap layout. Figure 4. CFF_DAQ graphical user interface ... properties were United Sensor Devices model USD-C-161 3 mm (1/8-inch) combination thermocouple/pressure probes, and static pressure taps. The ... was applied to the three static pressure taps at the throat of the bell-mouth and to the two exhaust duct static pressure taps. Once the data ...

  17. Android Based Behavioral Biometric Authentication via Multi-Modal Fusion

    DTIC Science & Technology

    2014-06-12

    ... such as the way he or she uses the mouse, or interacts with the Graphical User Interface (GUI) [9]. Described simply, standard biometrics is determined ... such as a login screen on a standard computer. Active authentication is authentication that occurs dynamically throughout interaction with the device. ... because they are higher-level constructs in themselves. The Android framework was specifically used for capturing the multitouch gestures: pinch and zoom.

  18. The Impact of User-Input Devices on Virtual Desktop Trainers

    DTIC Science & Technology

    2010-09-01

    ... playing the game more enjoyable. Some of these changes include the design of controllers, the controller interface, and ergonomic changes made to ... a within-subjects experimental design to evaluate young active-duty Soldiers' ability to move and shoot in a virtual environment using different input ... sufficient gaming proficiency, resulting in more time dedicated to training military skills. We employed a within-subjects experimental design to ...

  19. The further development of the active urine collection device: a novel continence management system.

    PubMed

    Tinnion, E; Jowitt, F; Clarke-O'Neill, S; Cottenden, A M; Fader, M; Sutherland, I

    2003-01-01

    Continence difficulties affect the lives of a substantial minority of the population. Women are far more likely than men to be affected by urinary incontinence but the range of management options for them is limited. There has been considerable interest in developing an external urine collection system for women but without success to date. This paper describes the development and preliminary clinical testing of an active urine collection device (AUCD), which could provide a solution for sufferers. The device uses stored vacuum, protected by a high bubble point filter, to remove urine as quickly as it is produced. This allows a small battery-operated pump to provide the required vacuum, enabling the device to be portable. Two different types of non-invasive patient/device interface were developed, and tested by volunteers: urinal and small pad. The slimline urinal was popular with users although liquid noise was a problem. The pad interface was successful on occasions but further work is necessary to produce a reliable pad. This study has successfully demonstrated that a prototype AUCD liquid handling system can remove urine at clinically relevant flowrates. While further development is required, volunteer tests have shown that the AUCD could be a useful advance in continence management.

  20. User Interface Technology for Formal Specification Development

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Formal specification development and modification are an essential component of the knowledge-based software life cycle. User interface technology is needed to empower end-users to create their own formal specifications. This paper describes the advanced user interface for AMPHION, a knowledge-based software engineering system that targets scientific subroutine libraries. AMPHION is a generic, domain-independent architecture that is specialized to an application domain through a declarative domain theory. Formal specification development and reuse are made accessible to end-users through an intuitive graphical interface that provides semantic guidance in creating diagrams denoting formal specifications in an application domain. The diagrams also serve to document the specifications. Automatic deductive program synthesis ensures that end-user specifications are correctly implemented. The tables that drive AMPHION's user interface are automatically compiled from a domain theory; portions of the interface can be customized by the end-user. The user interface facilitates formal specification development by hiding syntactic details, such as logical notation. It also turns some of the barriers to end-user specification development associated with strongly typed formal languages into active sources of guidance, without restricting advanced users. The interface is especially suited for specification modification. AMPHION has been applied to the domain of solar system kinematics through the development of a declarative domain theory. Testing over six months with planetary scientists indicates that AMPHION's interactive specification acquisition paradigm enables users to develop, modify, and reuse specifications at least an order of magnitude more rapidly than manual program development.

  1. User-Adapted Recommendation of Content on Mobile Devices Using Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Iwasaki, Hirotoshi; Mizuno, Nobuhiro; Hara, Kousuke; Motomura, Yoichi

    Mobile devices, such as cellular phones and car navigation systems, are essential to daily life. People acquire necessary information and preferred content over communication networks anywhere, anytime. However, usability issues arise from the simplicity of user interfaces themselves. Thus, a recommendation of content that is adapted to a user's preference and situation will help the user select content. In this paper, we describe a method to realize such a system using Bayesian networks. This user-adapted mobile system is based on a user model that provides recommendation of content (i.e., restaurants, shops, and music that are suitable to the user and situation) and that learns incrementally based on accumulated usage history data. However, sufficient samples are not always guaranteed, since a user model would require combined dependency among users, situations, and contents. Therefore, we propose the LK method for modeling, which complements incomplete and insufficient samples using knowledge data, and CPT incremental learning for adaptation based on a small number of samples. In order to evaluate the methods proposed, we applied them to restaurant recommendations made on car navigation systems. The evaluation results confirmed that our model based on the LK method can be expected to provide better generalization performance than that of the conventional method. Furthermore, our system would require much less operation than current car navigation systems from the beginning of use. Our evaluation results also indicate that learning a user's individual preference through CPT incremental learning would be beneficial to many users, even with only a few samples. As a result, we have developed the technology of a system that becomes more adapted to a user the more it is used.
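The CPT incremental-learning idea above — updating a conditional probability table one observation at a time — can be sketched with Laplace-smoothed counts. This is a generic illustration, not the paper's LK method or its exact update rule; the class name and smoothing constant are invented.

```python
from collections import defaultdict

class IncrementalCPT:
    """Conditional probability table P(content | situation), updated one
    observation at a time; a Dirichlet-style prior keeps estimates sane
    when only a few samples have been seen."""

    def __init__(self, outcomes, prior=1.0):
        self.outcomes = list(outcomes)
        self.prior = prior
        self.counts = defaultdict(lambda: defaultdict(float))

    def update(self, situation, outcome):
        # One usage-history record: in `situation`, the user chose `outcome`.
        self.counts[situation][outcome] += 1.0

    def prob(self, situation, outcome):
        c = self.counts[situation]
        total = sum(c.values()) + self.prior * len(self.outcomes)
        return (c[outcome] + self.prior) / total
```

Before any data arrives the table falls back to a uniform distribution, and each new sample shifts it incrementally, which is the behavior the abstract describes for learning from small amounts of usage history.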

  2. The MAGIC Touch: Combining MAGIC-Pointing with a Touch-Sensitive Mouse

    NASA Astrophysics Data System (ADS)

    Drewes, Heiko; Schmidt, Albrecht

    In this paper, we show how to use the combination of eye-gaze and a touch-sensitive mouse to ease pointing tasks in graphical user interfaces. A touch of the mouse positions the mouse pointer at the current gaze position of the user. Thus, the pointer is always at the position where the user expects it on the screen. This approach changes the user experience in tasks that include frequent switching between keyboard and mouse input (e.g. working with spreadsheets). In a user study, we compared the touch-sensitive mouse with a traditional mouse and observed speed improvements for pointing tasks on complex backgrounds. For pointing task on plain backgrounds, performances with both devices were similar, but users perceived the gaze-sensitive interaction of the touch-sensitive mouse as being faster and more convenient. Our results show that using a touch-sensitive mouse that positions the pointer on the user’s gaze position reduces the need for mouse movements in pointing tasks enormously.
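The interaction itself reduces to two event handlers, sketched below with hypothetical names independent of any real GUI toolkit: touching the mouse warps the pointer to the current gaze position, after which ordinary relative mouse motion takes over for fine positioning.

```python
class GazeWarpPointer:
    """Toy model of MAGIC-style pointing with a touch-sensitive mouse."""

    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def on_mouse_touch(self, gaze_x, gaze_y):
        # The moment the hand touches the mouse, jump to the gaze point,
        # so the pointer is already where the user is looking.
        self.x, self.y = gaze_x, gaze_y

    def on_mouse_move(self, dx, dy):
        # Fine positioning remains ordinary relative mouse motion.
        self.x += dx
        self.y += dy
```

The large ballistic phase of the pointing movement is replaced by the warp, leaving only a short corrective motion, which is why the study saw reduced mouse travel.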

  3. How to Develop a User Interface That Your Real Users Will Love

    ERIC Educational Resources Information Center

    Phillips, Donald

    2012-01-01

    A "user interface" is the part of an interactive system that bridges the user and the underlying functionality of the system. But people sometimes forget that the best interfaces will provide a platform to optimize the users' interactions so that they support and extend the users' activities in effective, useful, and usable ways. To look at it…

  4. Make E-Learning Effortless! Impact of a Redesigned User Interface on Usability through the Application of an Affordance Design Approach

    ERIC Educational Resources Information Center

    Park, Hyungjoo; Song, Hae-Deok

    2015-01-01

    Given that a user interface interacts with users, a critical factor to be considered in improving the usability of an e-learning user interface is user-friendliness. Affordances enable users to more easily approach and engage in learning tasks because they strengthen positive, activating emotions. However, most studies on affordances limit…

  5. Computer-Based Tools for Evaluating Graphical User Interfaces

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1997-01-01

    The user interface is the component of a software system that connects two very complex system: humans and computers. Each of these two systems impose certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user inter-face is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; but it should be part of an iterative design cycle with the output of evaluation being feed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research an automatic layout tool to be used in the initial design of graphical inter- faces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.

  6. Role-Based And Adaptive User Interface Designs In A Teledermatology Consult System: A Way To Secure And A Way To Enhance

    PubMed Central

    Lin, Yi-Jung; Speedie, Stuart

    2003-01-01

    User interface design is one of the most important parts of developing applications. Nowadays, a quality user interface must not only accommodate interaction between machines and users, but also recognize differences between users and provide functionality tailored from role to role, or even from individual to individual. With the web-based application of our Teledermatology consult system, the development environment gives us highly useful opportunities to create dynamic user interfaces, which let us gain greater access control and have the potential to increase the efficiency of the system. We describe the two models of user interfaces in our system: role-based and adaptive. PMID:14728419

  7. Towards automation of user interface design

    NASA Technical Reports Server (NTRS)

    Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst

    1992-01-01

    This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMS's) which basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to get a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.
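One way to picture such a default-interface generator is a table mapping data types to default widgets, with overrides applied in later prototyping cycles. The sketch below is purely illustrative; the type-to-widget table and function names are invented, not the UIMS mechanism the paper describes.

```python
# Invented default mapping from field types to widget kinds.
DEFAULT_WIDGETS = {bool: "checkbox", int: "spinbox",
                   float: "slider", str: "text_field"}

def default_layout(fields, overrides=None):
    """fields: dict name -> python type. Returns a name -> widget map:
    the automatically generated default, with per-field overrides
    standing in for customization in later prototyping cycles."""
    overrides = overrides or {}
    return {name: overrides.get(name, DEFAULT_WIDGETS[ftype])
            for name, ftype in fields.items()}
```

The designer gets a working default for free and only specifies the deviations, which is the prototyping workflow the abstract advocates.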

  8. Controlling a human-computer interface system with a novel classification method that uses electrooculography signals.

    PubMed

    Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng

    2013-08-01

    Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
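The eight-direction scheme can be illustrated with a deliberately simplified classifier: threshold the horizontal and vertical EOG deflections and combine their signs. The threshold value and function below are invented for illustration and stand in for the paper's trained feature-extraction algorithm.

```python
def classify_eog(h, v, thresh=50.0):
    """Map a horizontal (h) and vertical (v) EOG deflection to one of
    eight directions, or 'rest' when both channels are below threshold.
    Units and threshold are illustrative (e.g. microvolts)."""
    horiz = "right" if h > thresh else "left" if h < -thresh else ""
    vert = "up" if v > thresh else "down" if v < -thresh else ""
    if vert and horiz:
        return f"{vert}-{horiz}"      # diagonal: e.g. "up-left"
    return vert or horiz or "rest"
```

A real system would add blink detection and per-user calibration of the thresholds, but the eight output classes are exactly those listed in the abstract.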

  9. Designing the user interface: strategies for effective human-computer interaction

    NASA Astrophysics Data System (ADS)

    Shneiderman, B.

    1998-03-01

    In revising this popular book, Ben Shneiderman again provides a complete, current and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.

  10. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    PubMed

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movement and camera control. Three control conditions were tested: 1) classical joystick control of both the movements of the robot and the robot camera; 2) robot movements controlled by a joystick and the robot camera controlled by the user's head orientation; and 3) robot movements controlled by hand gestures and the robot camera controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, particularly for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
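The head-orientation camera control in conditions 2 and 3 can be sketched as a mapping from head angles to pan/tilt commands with a dead zone, so small involuntary head movements do not move the camera. The gains, limits and names below are assumptions for illustration, not the authors' implementation.

```python
def head_to_camera(yaw_deg, pitch_deg, dead_zone=3.0, limit=90.0):
    """Map the operator's head yaw/pitch (degrees) to camera pan/tilt
    commands: ignore motion inside the dead zone, then track the head
    angle up to a mechanical limit."""
    def shape(angle):
        if abs(angle) <= dead_zone:
            return 0.0                         # hold the camera still
        sign = 1.0 if angle > 0 else -1.0
        return sign * min(abs(angle) - dead_zone, limit)
    return shape(yaw_deg), shape(pitch_deg)
```

The dead zone is one simple way to make the non-invasive head tracker feel stable; tuning it trades responsiveness against jitter, which is part of why novice users benefit most.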

  11. Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

    NASA Astrophysics Data System (ADS)

    Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad

    2017-10-01

    Recently, Augmented Reality (AR) is an emerging technology in many mobile applications. Mobile AR was defined as a medium for displaying information merged with the real world environment mapped with augmented reality surrounding in a single view. There are four main types of mobile augmented reality interfaces and one of them are multimodal interfaces. Multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. In multimodal interface, many frameworks have been proposed to guide the designer to develop a multimodal applications including in augmented reality environment but there has been little work reviewing the framework of adaptive multimodal interface in mobile augmented reality. The main goal of this study is to propose a conceptual framework to illustrate the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks that have been proposed in the field of multimodal interfaces, adaptive interface and augmented reality. We analyzed the components in the previous frameworks and measure which can be applied in mobile devices. Our framework can be used as a guide for designers and developer to develop a mobile AR application with an adaptive multimodal interfaces.

  12. TongueToSpeech (TTS): Wearable wireless assistive device for augmented speech.

    PubMed

    Marjanovic, Nicholas; Piccinini, Giacomo; Kerr, Kevin; Esmailbeigi, Hananeh

    2017-07-01

    Speech is an important aspect of human communication; individuals with speech impairment are unable to communicate vocally in real time. Our team has developed the TongueToSpeech (TTS) device with the goal of augmenting speech communication for the vocally impaired. The proposed device is a wearable wireless assistive device that incorporates a capacitive touch keyboard interface embedded inside a discreet retainer. The device connects to a computer, tablet or smartphone via Bluetooth. The developed TTS application converts text typed by the tongue into audible speech. Our studies concluded that an 8-contact-point configuration between the tongue and the TTS device yields the best user precision and speed. On average, typing with the TTS device inside the oral cavity takes 2.5 times longer than typing the same phrase with the index finger on a T9 (Text on 9 keys) keyboard. In conclusion, we have developed a discreet, noninvasive wearable device that allows vocally impaired individuals to communicate in real time.
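A multi-tap text scheme over 8 contact points, in the spirit of the T9 comparison above, might look like the sketch below. The letter layout and function are invented for illustration; the paper does not specify its character mapping.

```python
# Hypothetical letter layout for 8 tongue contact points.
LAYOUT = ["abc", "def", "ghi", "jkl", "mno", "pqr", "stu", "vwxyz"]

def decode_taps(taps):
    """taps: list of (contact_index, tap_count) pairs. Repeated taps on
    the same contact cycle through that contact's letters, as in
    multi-tap phone keypads. Returns the decoded text."""
    out = []
    for contact, count in taps:
        letters = LAYOUT[contact]
        out.append(letters[(count - 1) % len(letters)])
    return "".join(out)
```

The decoded text would then be handed to a speech synthesizer on the paired phone or tablet over Bluetooth.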

  13. Distributed smart device for monitoring, control and management of electric loads in domotic environments.

    PubMed

    Morales, Ricardo; Badesa, Francisco J; García-Aracil, Nicolas; Perez-Vidal, Carlos; Sabater, Jose María

    2012-01-01

    This paper presents a microdevice for monitoring, control and management of electric loads at home. The key idea is to compact the electronic design as much as possible in order to install it inside a Schuko socket. Moreover, the electronic Schuko socket (electronic microdevice + Schuko socket) can communicate with a central unit and with other microdevices over the existing power lines. Because it uses the existing power lines, the proposed device can be installed in new buildings or in old ones. The main use of this device is to monitor, control and manage electric loads to save energy and prevent accidents produced by the different kinds of devices (e.g., an iron) used in domestic tasks. The developed smart device is based on a single-phase multifunction energy meter manufactured by Analog Devices (ADE7753) to measure electrical energy consumption and transmit it over a serial interface. To provide current measurements to the ADE7753, an ultra-flat SMD open-loop integrated-circuit current transducer based on the Hall effect, manufactured by LEM (FHS-40P/SP600), has been used. Moreover, each smart device has a LonWorks PL-3120 smart transceiver to execute the user's program, communicate with the ADE7753 via the serial interface, and transmit information to the central unit via powerline communication. Experimental results show the accuracy of the measurements made using the developed smart device.
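The monitor-and-manage role of the socket can be sketched as an energy integrator with a trip limit. The class below is a simulation only: real register reads from the ADE7753 over its serial interface are replaced by a stubbed sample() call, and the names and limits are invented for illustration.

```python
class SmartSocket:
    """Toy model of a metering socket: integrate energy samples and
    open the relay when a configurable consumption limit is exceeded."""

    def __init__(self, limit_wh):
        self.limit_wh = limit_wh
        self.energy_wh = 0.0
        self.relay_on = True

    def sample(self, power_w, hours):
        """Account for one metering interval (power in watts, duration
        in hours); trip the relay once the limit is passed. Returns the
        relay state, standing in for a report to the central unit."""
        if self.relay_on:
            self.energy_wh += power_w * hours
            if self.energy_wh > self.limit_wh:
                self.relay_on = False   # cut the load, e.g. a forgotten iron
        return self.relay_on
```

In the real device the accumulated energy would come from the meter's registers and the trip decision could also be taken remotely by the central unit over the power line.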

  14. Improved head direction command classification using an optimised Bayesian neural network.

    PubMed

    Nguyen, Son T; Nguyen, Hung T; Taylor, Philip B; Middleton, James

    2006-01-01

    Assistive technologies have recently emerged to improve the quality of life of severely disabled people by enhancing their independence in daily activities. Since many of these individuals have limited or no control from the neck down, alternative hands-free input modalities have become very important for accessing assistive devices. In hands-free control, head movement has proved to be a very effective user interface, as it provides a comfortable, reliable and natural way to access the device. Recently, neural networks have been shown to be useful not only for real-time pattern recognition but also for creating user-adaptive models. Since multi-layer perceptron neural networks trained using standard back-propagation may generalise poorly, the Bayesian technique has been proposed to improve the generalisation and robustness of these networks. This paper describes the use of Bayesian neural networks in developing a hands-free wheelchair control system. The experimental results show that, with an optimised architecture, Bayesian neural network classification can detect the head commands of wheelchair users accurately, irrespective of their level of injury.
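The Bayesian ingredient here — a Gaussian prior on the weights, which acts as weight decay and curbs overfitting — can be shown on a toy single-unit classifier. This is not the paper's multi-layer network or its data; the features, alpha, learning rate and epoch count are all illustrative.

```python
import math

def train_logistic(data, alpha=0.1, lr=0.5, epochs=200):
    """Train a single logistic unit by gradient ascent on the
    log-likelihood plus a log Gaussian weight prior. data is a list of
    (feature_vector, label in {0, 1}); alpha is the prior precision,
    i.e. the weight-decay strength."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i, xi in enumerate(x):
                # (y - p) * xi is the likelihood gradient;
                # -alpha * w[i] is the pull of the Gaussian prior.
                w[i] += lr * ((y - p) * xi - alpha * w[i])
    return w
```

The prior keeps the weights from growing without bound on small training sets, which is the generalisation benefit the abstract attributes to the Bayesian technique.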

  15. A Mobile Virtual Butler to Bridge the Gap between Users and Ambient Assisted Living: A Smart Home Case Study

    PubMed Central

    Costa, Nuno; Domingues, Patricio; Fdez-Riverola, Florentino; Pereira, António

    2014-01-01

    Ambient Intelligence promises to transform current spaces into electronic environments that are responsive, assistive and sensitive to human presence. Those electronic environments will be fully populated with dozens, hundreds or even thousands of connected devices that share information and thus become intelligent. That massive wave of electronic devices will also invade everyday objects, turning them into smart entities that keep their native features and characteristics while being seamlessly promoted to a new class of thinking and reasoning everyday objects. Although there are strong expectations that most of the users' needs can be fulfilled without their intervention, there are still situations where interaction is required. This paper presents work being done in the field of human-computer interaction, focusing on smart home environments, as part of a larger project called Aging Inside a Smart Home (AISH). This initiative arose as a way to address a serious problem in our country, where many elderly people live alone in their homes, often with limited or no physical mobility. The project relies on the mobile agent computing paradigm to create a Virtual Butler that provides the interface between the elderly and the smart home infrastructure. The Virtual Butler is receptive to user questions, answering them according to the context and knowledge of the AISH. It is also capable of interacting with the user whenever it senses that something has gone wrong, notifying next of kin and/or medical services. The Virtual Butler is aware of the user's location and moves to the computing device closest to the user, so that it is always present. Its avatar can also run on handheld devices, keeping its main functionality, in order to track the user when he or she goes out. According to the evaluation carried out, the Virtual Butler was assessed as a very interesting and well-liked digital companion, filling the gap between the user and the smart home. The evaluation also showed that the Virtual Butler concept can be easily ported to other types of smart and assistive environments such as airports, hospitals, shopping malls and offices. PMID:25102342

  17. UNIPIC code for simulations of high power microwave devices

    NASA Astrophysics Data System (ADS)

    Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze

    2009-03-01

    In this paper, UNIPIC, a new member of the family of fully electromagnetic particle-in-cell (PIC) codes for simulating high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time-step reduction of the conformal-path FDTD method, the CP weakly conditionally stable FDTD (CP WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including Windows, Linux, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices, or to import previously created structures. Numerical experiments on some typical HPM devices using the UNIPIC code are given. The results agree well with those obtained from some well-known PIC codes.
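
    For illustration, the relativistic Newton-Lorentz particle push in PIC codes of this kind is commonly implemented with the Boris scheme; the sketch below is a generic version with invented parameters, not UNIPIC's actual implementation.

```python
import numpy as np

# Generic relativistic Boris push: half electric kick, magnetic
# rotation, second half electric kick. u = gamma * v is the
# momentum per unit mass; all parameters are illustrative.

Q = -1.602e-19   # electron charge [C]
M = 9.109e-31    # electron mass [kg]
C = 2.998e8      # speed of light [m/s]

def boris_push(u, E, B, dt):
    """Advance u = gamma*v by one time step under fields E, B."""
    qmdt2 = Q * dt / (2.0 * M)
    u_minus = u + qmdt2 * E                     # half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    t = qmdt2 * B / gamma                       # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)    # magnetic rotation
    u_plus = u_minus + np.cross(u_prime, s)
    return u_plus + qmdt2 * E                   # second half electric kick

u = np.zeros(3)                                 # particle initially at rest
E = np.array([1e6, 0.0, 0.0])                   # 1 MV/m accelerating field
for _ in range(100):
    u = boris_push(u, E, np.zeros(3), 1e-12)    # 100 steps of 1 ps
```

    With zero magnetic field the rotation is the identity and each step reduces to a plain qE/m kick, which makes the scheme easy to sanity-check.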

  18. Lunar Mapping and Modeling On-the-Go: A mobile framework for viewing and interacting with large geospatial datasets

    NASA Astrophysics Data System (ADS)

    Chang, G.; Kim, R.; Bui, B.; Sadaqathullah, S.; Law, E.; Malhotra, S.

    2012-12-01

    The Lunar Mapping and Modeling Portal (LMMP, https://www.lmmp.nasa.gov/) is a collaboration between four NASA centers (JPL, Marshall, Goddard, and Ames), the USGS and the US Army to provide a centralized geospatial repository for lunar data, from processed data collected during the Apollo missions to the latest data acquired by the Lunar Reconnaissance Orbiter (LRO). We offer various scientific and visualization tools to analyze rock and crater densities, lighting maps, thermal measurements, mineral concentrations, slope hazards, and digital elevation maps, with the intention of serving not only scientists and lunar mission planners but also the general public. The project has pioneered the use of new technologies and embraced new computing paradigms to create a system that is sophisticated, secure, robust, and scalable while remaining easy to use, streamlined, and modular. We have led innovations through the use of a hybrid cloud infrastructure, authentication through various sources, and an in-house GIS framework, TWMS (Tiled WMS), as well as the commercial ArcGIS product from ESRI. On the client end, we also provide a Flash GUI framework as well as REST web services to interact with the portal. We have also developed a visualization framework for mobile devices, specifically Apple's iOS, which allows anyone, anywhere, to interact with LMMP. At the most basic level, the framework allows users to browse LMMP's entire catalog of over 600 imagery products, ranging from global basemaps to LRO Narrow Angle Camera (NAC) images that resolve details down to 0.5 meters/pixel. Users can view map metadata, zoom in and out, and pan around the entire lunar surface with the appropriate basemap. They can arbitrarily stack the maps and images on top of each other to show a layered view of the surface, with layer transparency adjusted to suit the user's desired look. 
Once the user has selected a combination of layers, they can bookmark those layers for quick access in subsequent sessions. A search tool is also provided to allow users to quickly find points of interest on the Moon and to view the auxiliary data associated with each feature. More advanced features include the ability to interact with the data. Using the services provided by the portal, users will be able to log in and access the same scientific analysis tools provided on the web site, including measuring the distance between two points, generating subsets, and running other analysis tools, all through a customized touch interface that is immediately familiar to users of these smart mobile devices. Users can also access their own storage on the portal and view or send the data to other users. Finally, there are features that exploit functionality only mobile devices can provide, such as using the gyroscopes and motion sensors as a haptic interface to visualize lunar data in 3D, on the device as well as potentially on a large screen. The mobile framework that we have developed for LMMP provides a glimpse of what is possible in visualizing and manipulating large geospatial data on small portable devices. While the framework is currently tuned to our portal, we hope to generalize the tool to use data sources from any type of GIS service.
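
    As a sketch of how a mobile client might request one basemap tile from a tiled-WMS service of this kind; the endpoint, layer name and parameter choices below are hypothetical, not LMMP's actual API.

```python
from urllib.parse import urlencode

# Build a standard WMS GetMap URL for a single map tile.
# The base URL and layer name are invented for illustration.

def tile_url(base, layer, bbox, size=256):
    """Return a GetMap URL for one tile covering bbox = (w, s, e, n)."""
    params = {
        "SERVICE": "WMS",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size,
        "HEIGHT": size,
        "FORMAT": "image/jpeg",
    }
    return base + "?" + urlencode(params)

url = tile_url("https://example.org/twms", "lunar_basemap", (-45, 0, 0, 45))
```

    A client that pans or zooms simply issues many such requests with shifted bounding boxes and caches the returned tiles.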

  19. A New Concept of Controller for Accelerators' Magnet Power Supplies

    NASA Astrophysics Data System (ADS)

    Visintini, Roberto; Cleva, Stefano; Cautero, Marco; Ciesla, Tomasz

    2016-04-01

    The complexity of a particle accelerator implies the remote control of very large numbers of devices of many different typologies, either distributed along the accelerator or concentrated in locations often far away from each other. Local and global control systems handle the devices through dedicated communication channels and interfaces. Each controlled device is in practice a “smart node” performing a specific task, and very often those tasks are managed in real time. The performance required of the control interface influences the cost of the distributed nodes as well as their hardware and software implementation. In large facilities (e.g., CERN) the “smart nodes” derive from specific in-house developments. Alternatively, it is possible to find commercial devices on the market whose performance (and price) spans a broad range, from proprietary designs (customizable to the user's needs) to open-source designs. In this paper, we describe some applications of smart nodes in the particle accelerator field, with special focus on power supplies for magnets. In modern accelerators, in fact, magnets and their associated power supplies constitute systems distributed along the accelerator itself, strongly interfaced with the remote control system as well as with more specific (and often more demanding) orbit/trajectory feedback systems. We give examples of actual systems installed and operational on two light sources, Elettra and FERMI, located in the Elettra Research Center in Trieste, Italy.

  20. The Virtual Tablet: Virtual Reality as a Control System

    NASA Technical Reports Server (NTRS)

    Chronister, Andrew

    2016-01-01

    In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. 
My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
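
    The fusion step described above, combining the motion-capture position with the sensor package's orientation into one pose for the virtual surrogate, can be sketched as follows; the quaternion convention and example numbers are illustrative assumptions, not the project's actual code.

```python
import numpy as np

# Fuse a tracked marker position and an IMU orientation quaternion
# into a single rigid-body transform for placing the virtual object.

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def pose_matrix(position, quaternion):
    """Homogeneous 4x4 transform placing the virtual surrogate."""
    T = np.eye(4)
    T[:3, :3] = quat_to_matrix(quaternion)
    T[:3, 3] = position
    return T

# 90-degree rotation about z, one metre in front of the user
T = pose_matrix([0.0, 0.0, -1.0],
                [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
```

    The renderer then applies `T` relative to the head-mounted display's own pose so the virtual tablet tracks the physical one.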

  1. Optimization of Microelectronic Devices for Sensor Applications

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    2000-01-01

    The NASA/JPL goal of reducing payload in future space missions while increasing mission capability demands the miniaturization of active and passive sensors, analytical instruments and communication systems, among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space (searching for optimized performance by repeated fabrication efforts) is infeasible, it does motivate the development of reliable software design tools. These tools require models based on the fundamental physics and mathematics of the device to accurately capture effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly designed data structures. These tools can then be integrated into an optimization environment, and with the memory capacity and computational speed of high-performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications to the electromagnetic modeling of infrared filtering, as well as to heterostructure device design, are presented using genetic-algorithm global optimization methods.
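
    As a hedged sketch of genetic-algorithm global optimization of the kind mentioned above, the toy below maximizes a stand-in fitness function where the real tool would call an expensive device simulation; population size, operators and all parameters are invented.

```python
import random

# Minimal genetic algorithm: truncation selection, arithmetic
# crossover, Gaussian mutation, over a one-dimensional design space.

random.seed(1)

def fitness(x):
    """Stand-in for an expensive device simulation: peak at x = 0.7."""
    return -(x - 0.7) ** 2

def evolve(pop_size=30, generations=60, mut=0.05):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                   # arithmetic crossover
            child += random.gauss(0.0, mut)         # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    Because the fittest parents are carried over unchanged, the best design found never degrades between generations; only the simulated fitness evaluation would change in a real device-optimization run.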

  2. Adaptive Interfaces

    DTIC Science & Technology

    1990-11-01

    to design and implement an adaptive intelligent interface for a command-and-control-style domain. The primary functionality of the resulting...technical tasks, as follows: 1. Analysis of Current Interface Technologies 2. Delineation of User Roles 3. Development of User Models 4. Design of Interface...Management Association (FEMA). In the initial version of the prototype, two distinct user models were designed. One type of user modeled by the system is

  3. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    NASA Astrophysics Data System (ADS)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  4. Voice emotion perception and production in cochlear implant users.

    PubMed

    Jiam, N T; Caldwell, M; Deroche, M L; Chatterjee, M; Limb, C J

    2017-09-01

    Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant users are often forced to interface with highly degraded prosodic cues as a result of device constraints in extraction, processing, and transmission. As such, individuals with cochlear implants frequently demonstrate significant difficulty in recognizing voice emotions in comparison to their normal-hearing counterparts. Cochlear implant-mediated perception and production of voice emotion is an important but relatively understudied area of research. However, a rich understanding of voice emotion auditory processing offers opportunities to improve upon CI biomedical design and to develop training programs that benefit CI performance. In this review, we address the issues, current literature, and future directions for improved voice emotion processing in cochlear implant users. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. Design and evaluation of a telemonitoring concept based on NFC-enabled mobile phones and sensor devices.

    PubMed

    Morak, Jürgen; Kumpusch, Hannes; Hayn, Dieter; Modre-Osprian, Robert; Schreier, Günter

    2012-01-01

    Utilization of information and communication technologies such as mobile phones and wireless sensor networks is becoming more and more common in the field of telemonitoring for chronic diseases. Providing elderly people with a mobile-phone-based patient terminal requires a barrier-free design of the overall user interface, including the setup of wireless communication links to sensor devices. To easily manage the connection between a mobile phone and wireless sensor devices, a concept based on the combination of Bluetooth and near-field communication technology has been developed. It allows communication between two devices to be initiated simply by bringing them close together for a few seconds, without manually configuring the communication link. This concept has been piloted with a sensor device and evaluated in terms of usability and feasibility. Results indicate that this solution has the potential to simplify the handling of wireless sensor networks for people with limited technical skills.

  7. Compact and portable digitally controlled device for testing footwear materials: technical note.

    PubMed

    Foto, James G

    2008-01-01

    Little or no practical decision-making data are available to the foot-care provider regarding the selection of orthotic materials used in therapeutic footwear. A device for simulating in-shoe forefoot conditions for the testing of orthosis materials is described. Materials are tested for their effectiveness by evaluating and comparing stress-strain and dynamic compression fatigue characteristics. The device, called the Cyclical Compression Tester (CCT), has been optimized for size, simplicity of construction, and cost. Application of the device ranges from the clinician deciding the useful life of single- and multidensity orthosis materials to the researcher characterizing materials for finite-element analysis modeling. This real-time CCT device and custom user interface combine to make an evaluation tool useful for testing how the pressure distribution of in-shoe materials changes over time in therapeutic footwear for those with peripheral neuropathy at risk for foot injury.

  8. User interface issues in supporting human-computer integrated scheduling

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.

  9. Development of a Multi-Agent m-Health Application Based on Various Protocols for Chronic Disease Self-Management.

    PubMed

    Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun

    2016-01-01

    The purpose of this study was to develop and evaluate a mobile health application (Self-Management mobile Personal Health Record: "SmPHR") to ensure the interoperability of various personal health devices (PHDs) and electronic medical record systems (EMRs) for continuous self-management by chronic disease patients. The SmPHR was developed for Android 4.0.3 and implemented according to the standard protocol adopted by the Continua Health Alliance (CHA) for each healthcare service interface. That is, the Personal Area Network (PAN) interface between the application and PHDs implements ISO/IEEE 11073-20601, -10404, -10407, -10415, and -10417 and the Bluetooth Health Device Profile (HDP); the wide area network (WAN) interface to EMRs implements HL7 V2.6; and the Health Record Network (HRN) interface implements the Continuity of Care Document (CCD) and Continuity of Care Record (CCR). After receiving institutional review board (IRB) approval, we evaluated the transmission error rate of each interface using four PHDs and personal health record systems (PHRs) from previous research, with users and elderly people. In the evaluation, the PAN interface showed 15 (2.4 %) errors, and the WAN and HRN interfaces showed 13 (2.1 %) errors, in a total of 611 transmission attempts. We also received opinions regarding SmPHR from 15 healthcare professionals who took part in the clinical trial. Thus, SmPHR can be provided as an interconnected PHR mobile health service to patients, allowing 'plug and play' of PHDs and EMRs through various standard protocols.

  10. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)

    2001-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  11. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.
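
    The correctness criterion can be pictured with a small state-machine check: an interface that lumps machine states into abstract states is adequate only if every state in a block responds to each user event uniformly and moves to the same block. The toy machine and partitions below are invented for illustration and are not the report's formal procedure.

```python
# Check whether a partition of machine states induces a deterministic,
# correct abstract interface. The machine is a made-up example.

machine = {  # state -> {event: next_state}
    "s0": {"a": "s1", "b": "s2"},
    "s1": {"a": "s3", "b": "s2"},
    "s2": {"a": "s3"},
    "s3": {},
}

def consistent(partition):
    """True if every block moves uniformly to one block per event."""
    block_of = {s: i for i, block in enumerate(partition) for s in block}
    for block in partition:
        for event in {e for s in block for e in machine[s]}:
            if any(event not in machine[s] for s in block):
                return False            # event not uniformly enabled
            targets = {block_of[machine[s][event]] for s in block}
            if len(targets) > 1:        # same event, different abstract states
                return False
    return True

ok = consistent([["s0"], ["s1"], ["s2"], ["s3"]])   # trivial partition
bad = consistent([["s0", "s1"], ["s2"], ["s3"]])    # lumps s0 and s1
```

    A succinct interface in the report's sense is then the coarsest consistent partition: the fewest abstract states the user must keep track of while still performing every task correctly.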

  12. The PennBMBI: Design of a General Purpose Wireless Brain-Machine-Brain Interface System.

    PubMed

    Liu, Xilin; Zhang, Milin; Subei, Basheer; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan

    2015-04-01

    In this paper, a general purpose wireless Brain-Machine-Brain Interface (BMBI) system is presented. The system integrates four battery-powered wireless devices for the implementation of a closed-loop sensorimotor neural interface, including a neural signal analyzer, a neural stimulator, a body-area sensor node and a graphical user interface implemented on the PC. The neural signal analyzer features a four-channel analog front-end with configurable bandpass filter, gain stage, digitization resolution, and sampling rate. The target frequency band is configurable from EEG to single-unit activity. A noise floor of 4.69 μVrms is achieved over a bandwidth from 0.05 Hz to 6 kHz. Digital filtering, neural feature extraction, spike detection, sensing-stimulating modulation, and compressed sensing measurement are realized in a central processing unit integrated in the analyzer. A flash memory card is also integrated in the analyzer. A 2-channel neural stimulator with a compliance voltage up to ± 12 V is included. The stimulator is capable of delivering unipolar or bipolar, charge-balanced current pulses with programmable pulse shape, amplitude, width, pulse train frequency and latency. A multi-functional sensor node, including an accelerometer, a temperature sensor, a flexiforce sensor and a general sensor extension port, has been designed. A computer interface is designed to monitor, control and configure all the aforementioned devices via a wireless link, according to a custom-designed communication protocol. Wireless closed-loop operation between the sensory devices, neural stimulator, and neural signal analyzer can be configured. The proposed system was designed to link two sites in the brain, bridging the brain and external hardware, as well as creating new sensory and motor pathways for clinical practice. Bench tests and in vivo experiments were performed to verify the functions and performance of the system.
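
    Spike detection of the kind performed in the analyzer's central processing unit is often threshold-based; the sketch below uses a common median-based noise estimate and an illustrative 4.5-sigma threshold with a refractory period, which are assumptions rather than the PennBMBI's published algorithm.

```python
import numpy as np

# Threshold-based spike detection on a sampled neural trace.
# Threshold rule and parameters are illustrative conventions.

def detect_spikes(signal, fs, refractory_ms=1.0):
    """Return sample indices where |signal| crosses the noise threshold."""
    sigma = np.median(np.abs(signal)) / 0.6745   # robust noise estimate
    threshold = 4.5 * sigma
    refractory = int(fs * refractory_ms / 1000)
    spikes, last = [], -refractory
    for i, v in enumerate(np.abs(signal)):
        if v > threshold and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes

fs = 24000                                       # 24 kHz sampling
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, fs)                     # 1 s of synthetic noise
x[[5000, 12000, 18000]] += 20.0                  # three injected "spikes"
spikes = detect_spikes(x, fs)
```

    The median-based estimate keeps the threshold stable even when spikes themselves inflate the signal variance, which a plain standard deviation would not.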

  13. Drinking from the Fire Hose: Why the Flight Management System Can Be Hard to Train and Difficult to Use

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Feary, Michael; Polson, Peter; Fennell, Karl

    2003-01-01

    The Flight Management Computer (FMC) and its interface, the Multi-function Control and Display Unit (MCDU), have been identified by researchers and airlines as difficult to train and use. Specifically, airline pilots have described the "drinking from the fire-hose" effect during training. Previous research has identified memorized action sequences as a major factor in a user's ability to learn and operate complex devices. This paper discusses the use of a method to examine the quantity of memorized action sequences required to perform a sample of 102 tasks using features of the Boeing 777 Flight Management Computer interface. The analysis identified a large number of memorized action sequences that must be learned during training and then recalled during line operations. Seventy-five percent of the tasks examined require recall of at least one memorized action sequence. Forty-five percent of the tasks require recall of a memorized action sequence and occur infrequently. The large number of memorized action sequences may provide an explanation for the difficulties in training and usage of the automation. Based on these findings, implications for training and the design of new user interfaces are discussed.

  14. Wireless physiological monitoring system for psychiatric patients.

    PubMed

    Rademeyer, A J; Blanckenberg, M M; Scheffer, C

    2009-01-01

    Patients in psychiatric hospitals who are sedated or secluded are at risk of death or injury if they are not continuously monitored. Some psychiatric patients are restless and aggressive, and hence the monitoring device should be robust and must transmit the data wirelessly. Two devices, a glove that measures oxygen saturation and a dorsally mounted device that measures heart rate, skin temperature and respiratory rate, were designed and tested. Both devices connect to one central monitoring station using two separate Bluetooth connections, ensuring a completely wireless setup. A Matlab graphical user interface (GUI) was developed for signal processing and monitoring of the vital signs of the psychiatric patient. Detection algorithms were implemented to detect ECG arrhythmias such as premature ventricular contraction and atrial fibrillation. The prototypes were manufactured and tested in a laboratory setting on healthy volunteers.
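
    A premature ventricular contraction detector can be as simple as flagging beats whose RR interval is much shorter than the running average; the rule and the 20 % threshold below are an illustrative sketch, not the algorithm the authors implemented.

```python
# Flag premature beats from a list of R-peak times: a beat arriving
# much earlier than the running mean RR interval is a candidate
# premature contraction. Thresholds are illustrative.

def premature_beats(r_times, ratio=0.8):
    """Return indices of beats whose RR interval < ratio * running mean RR."""
    rr = [b - a for a, b in zip(r_times, r_times[1:])]
    flagged, mean_rr = [], rr[0]
    for i, interval in enumerate(rr):
        if interval < ratio * mean_rr:
            flagged.append(i + 1)                 # index of the early beat
        mean_rr = 0.9 * mean_rr + 0.1 * interval  # exponential running mean
    return flagged

# Steady 0.8 s rhythm with one premature beat at t = 3.7 s
peaks = [0.0, 0.8, 1.6, 2.4, 3.2, 3.7, 4.8, 5.6]
```

    The exponential running mean lets the rule adapt to gradual heart-rate changes while still catching the single short interval.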

  15. Display management subsystem, version 1: A user's eye view

    NASA Technical Reports Server (NTRS)

    Parker, Dolores

    1986-01-01

    The structure and application functions of the Display Management Subsystem (DMS) are described. The DMS, a subsystem of the Transportable Applications Executive (TAE), was designed to provide a device-independent interface for an image processing and display environment. The system is callable by C and FORTRAN applications, portable to accommodate different image analysis terminals, and easily expandable to meet local needs. Generic applications are also available for performing many image processing tasks.

  16. OpenSesame: an open-source, graphical experiment builder for the social sciences.

    PubMed

    Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan

    2012-06-01

    In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.

  17. CARE 3 user-friendly interface user's guide

    NASA Technical Reports Server (NTRS)

    Martensen, A. L.

    1987-01-01

    CARE 3 predicts the unreliability of highly reliable reconfigurable fault-tolerant systems that include redundant computers or computer systems. CARE3MENU is a user-friendly interface used to create an input for the CARE 3 program. The CARE3MENU interface has been designed to minimize user input errors. Although a CARE3MENU session may be successfully completed and all parameters may be within specified limits or ranges, the CARE 3 program is not guaranteed to produce meaningful results if the user incorrectly interprets the CARE 3 stochastic model. The CARE3MENU User Guide provides complete information on how to create a CARE 3 model with the interface. The CARE3MENU interface runs under the VAX/VMS operating system.

  18. The Myosuit: Bi-articular Anti-gravity Exosuit That Reduces Hip Extensor Activity in Sitting Transfers.

    PubMed

    Schmidt, Kai; Duarte, Jaime E; Grimmer, Martin; Sancho-Puchades, Alejandro; Wei, Haiqi; Easthope, Chris S; Riener, Robert

    2017-01-01

    Muscle weakness-which can result from neurological injuries, genetic disorders, or typical aging-can affect a person's mobility and quality of life. For many people with muscle weakness, assistive devices provide the means to regain mobility and independence. These devices range from well-established technology, such as wheelchairs, to newer technologies, such as exoskeletons and exosuits. For assistive devices to be used in everyday life, they must provide assistance across activities of daily living (ADLs) in an unobtrusive manner. This article introduces the Myosuit, a soft, wearable device designed to provide continuous assistance at the hip and knee joint when working with and against gravity in ADLs. This robotic device combines active and passive elements with a closed-loop force controller designed to behave like an external muscle (exomuscle) and deliver gravity compensation to the user. At 4.1 kg (4.6 kg with batteries), the Myosuit is one of the lightest untethered devices capable of delivering gravity support to the user's knee and hip joints. This article presents the design and control principles of the Myosuit. It describes the textile interface, tendon actuators, and a bi-articular, synergy-based approach for continuous assistance. The assistive controller, based on bi-articular force assistance, was tested with a single subject who performed sitting transfers, one of the most gravity-intensive ADLs. The results show that the control concept can successfully identify changes in the posture and assist hip and knee extension with up to 26% of the natural knee moment and up to 35% of the knee power. We conclude that the Myosuit's novel approach to assistance using a bi-articular architecture, in combination with the posture-based force controller, can effectively assist its users in gravity-intensive ADLs, such as sitting transfers.

  19. The Myosuit: Bi-articular Anti-gravity Exosuit That Reduces Hip Extensor Activity in Sitting Transfers

    PubMed Central

    Schmidt, Kai; Duarte, Jaime E.; Grimmer, Martin; Sancho-Puchades, Alejandro; Wei, Haiqi; Easthope, Chris S.; Riener, Robert

    2017-01-01

    Muscle weakness—which can result from neurological injuries, genetic disorders, or typical aging—can affect a person's mobility and quality of life. For many people with muscle weakness, assistive devices provide the means to regain mobility and independence. These devices range from well-established technology, such as wheelchairs, to newer technologies, such as exoskeletons and exosuits. For assistive devices to be used in everyday life, they must provide assistance across activities of daily living (ADLs) in an unobtrusive manner. This article introduces the Myosuit, a soft, wearable device designed to provide continuous assistance at the hip and knee joint when working with and against gravity in ADLs. This robotic device combines active and passive elements with a closed-loop force controller designed to behave like an external muscle (exomuscle) and deliver gravity compensation to the user. At 4.1 kg (4.6 kg with batteries), the Myosuit is one of the lightest untethered devices capable of delivering gravity support to the user's knee and hip joints. This article presents the design and control principles of the Myosuit. It describes the textile interface, tendon actuators, and a bi-articular, synergy-based approach for continuous assistance. The assistive controller, based on bi-articular force assistance, was tested with a single subject who performed sitting transfers, one of the most gravity-intensive ADLs. The results show that the control concept can successfully identify changes in the posture and assist hip and knee extension with up to 26% of the natural knee moment and up to 35% of the knee power. We conclude that the Myosuit's novel approach to assistance using a bi-articular architecture, in combination with the posture-based force controller, can effectively assist its users in gravity-intensive ADLs, such as sitting transfers. PMID:29163120

  20. Toward a practical mobile robotic aid system for people with severe physical disabilities.

    PubMed

    Regalbuto, M A; Krouskop, T A; Cheatham, J B

    1992-01-01

    A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.

  1. Xyce Parallel Electronic Simulator : users' guide, version 2.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont

    2004-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; device models specifically tailored to meet Sandia's needs, including many radiation-aware devices; a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.
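
    At its core, an analog circuit simulator of this kind repeatedly assembles and solves systems of nodal equations G·v = i for the node voltages. A minimal pure-Python sketch follows; the circuit and values are invented, and Xyce's actual solvers are of course far more sophisticated (sparse, parallel, nonlinear):

```python
# Nodal analysis in miniature: solve G v = i by Gaussian elimination.
# Example circuit (illustrative): a 3.3 V source behind a 1 kohm resistor,
# feeding a 1 kohm load to ground -> the midpoint node sits at 1.65 V.

def solve(G, i):
    """Gaussian elimination with partial pivoting for the system G v = i."""
    n = len(G)
    A = [row[:] + [b] for row, b in zip(G, i)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (A[r][n] - sum(A[r][c] * v[c] for c in range(r + 1, n))) / A[r][r]
    return v

# One unknown node: (1/R1 + 1/R2) * v1 = V_src / R1, R1 = R2 = 1000 ohms.
g = 1 / 1000
v = solve([[2 * g]], [3.3 * g])
print(round(v[0], 2))  # 1.65
```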

  2. A Middleware Solution for Wireless IoT Applications in Sparse Smart Cities

    PubMed Central

    Lanzone, Stefano; Riberto, Giulio; Stefanelli, Cesare; Tortonesi, Mauro

    2017-01-01

    The spread of off-the-shelf mobile devices equipped with multiple wireless interfaces together with sophisticated sensors is paving the way to novel wireless Internet of Things (IoT) environments, characterized by multi-hop infrastructure-less wireless networks where devices carried by users act as sensors/actuators as well as network nodes. In particular, the paper presents Real Ad-hoc Multi-hop Peer-to-Peer Wireless IoT Application (RAMP-WIA), a novel solution that facilitates the development, deployment, and management of applications in sparse Smart City environments, characterized by users willing to collaborate by allowing new applications to be deployed on their smartphones to remotely monitor and control fixed/mobile devices. RAMP-WIA allows users to dynamically configure single-hop wireless links; to opportunistically manage multi-hop packet dispatching, considering that the network topology (together with the availability of sensors and actuators) may change abruptly; to reliably actuate sensor nodes, specifically considering that only part of them may actually be reachable in a timely manner; and to dynamically upgrade nodes through over-the-air distribution of new software components. The paper also reports the performance of RAMP-WIA in simple but realistic small-scale deployment scenarios with off-the-shelf Android smartphones and Raspberry Pi devices; these results show not only the feasibility and soundness of the proposed approach, but also the efficiency of the implemented middleware when deployed on real testbeds. PMID:29099745

  3. A Middleware Solution for Wireless IoT Applications in Sparse Smart Cities.

    PubMed

    Bellavista, Paolo; Giannelli, Carlo; Lanzone, Stefano; Riberto, Giulio; Stefanelli, Cesare; Tortonesi, Mauro

    2017-11-03

    The spread of off-the-shelf mobile devices equipped with multiple wireless interfaces together with sophisticated sensors is paving the way to novel wireless Internet of Things (IoT) environments, characterized by multi-hop infrastructure-less wireless networks where devices carried by users act as sensors/actuators as well as network nodes. In particular, the paper presents Real Ad-hoc Multi-hop Peer-to-Peer Wireless IoT Application (RAMP-WIA), a novel solution that facilitates the development, deployment, and management of applications in sparse Smart City environments, characterized by users willing to collaborate by allowing new applications to be deployed on their smartphones to remotely monitor and control fixed/mobile devices. RAMP-WIA allows users to dynamically configure single-hop wireless links; to opportunistically manage multi-hop packet dispatching, considering that the network topology (together with the availability of sensors and actuators) may change abruptly; to reliably actuate sensor nodes, specifically considering that only part of them may actually be reachable in a timely manner; and to dynamically upgrade nodes through over-the-air distribution of new software components. The paper also reports the performance of RAMP-WIA in simple but realistic small-scale deployment scenarios with off-the-shelf Android smartphones and Raspberry Pi devices; these results show not only the feasibility and soundness of the proposed approach, but also the efficiency of the implemented middleware when deployed on real testbeds.
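
    Multi-hop dispatching over a topology that may change abruptly can be sketched as re-querying the currently known link set per packet, so a destination may simply become unreachable. This is an illustrative sketch in the spirit of such middleware, not RAMP-WIA's actual API; the node names are invented:

```python
# Opportunistic multi-hop dispatch sketch: each packet is routed against
# the adjacency known *now*; a dropped link makes the target unreachable.

from collections import deque

def route(links, src, dst):
    """Breadth-first path from src to dst over the current link set,
    or None if dst is presently unreachable."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

links = {"phone_a": ["phone_b"], "phone_b": ["pi_sensor"], "pi_sensor": []}
print(route(links, "phone_a", "pi_sensor"))  # ['phone_a', 'phone_b', 'pi_sensor']
links["phone_b"] = []                        # topology change: link drops
print(route(links, "phone_a", "pi_sensor"))  # None
```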

  4. TangibleCubes — Implementation of Tangible User Interfaces through the Usage of Microcontroller and Sensor Technology

    NASA Astrophysics Data System (ADS)

    Setscheny, Stephan

    The interaction between human beings and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and sensors for detecting human interactions, developers are gaining new possibilities for realizing innovative interfaces. As this movement progresses, the relevance of computers in the conventional sense, and of graphical user interfaces, is decreasing. The impact of this technical evolution can be seen especially in the area of ubiquitous computing and interaction through tangible user interfaces. Moreover, tangible and experienceable interaction offers users an interactive and intuitive method for controlling technical objects. The use of microcontrollers for control functions, together with sensors, enables the realization of these experienceable interfaces. Besides the theory of tangible user interfaces, a consideration of sensors and the Arduino platform forms a main aspect of this work.

  5. User interface design principles for the SSM/PMAD automated power system

    NASA Technical Reports Server (NTRS)

    Jakstas, Laura M.; Myers, Chris J.

    1991-01-01

    Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.

  6. Combining factual and heuristic knowledge in knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando; Hull, Richard; Karr, Clark; Hosken, Bruce; Verhagen, William

    1992-01-01

    A knowledge acquisition technique that combines heuristic and factual knowledge represented as two hierarchies is described. These ideas were applied to the construction of a knowledge acquisition interface to the Expert System Analyst (OPERA). The goal of OPERA is to improve the operations support of the computer network in the space shuttle launch processing system. The knowledge acquisition bottleneck lies in gathering knowledge from human experts and transferring it to OPERA. OPERA's knowledge acquisition problem is approached as a classification problem-solving task, combining this approach with the use of factual knowledge about the domain. The interface was implemented in a Symbolics workstation making heavy use of windows, pull-down menus, and other user-friendly devices.

  7. A human factors approach to range scheduling for satellite control

    NASA Technical Reports Server (NTRS)

    Wright, Cameron H. G.; Aitken, Donald J.

    1991-01-01

    Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.

  8. Developing a Graphical User Interface for the ALSS Crop Planning Tool

    NASA Technical Reports Server (NTRS)

    Koehlert, Erik

    1997-01-01

    The goal of my project was to create a graphical user interface for a prototype crop scheduler. The crop scheduler was developed by Dr. Jorge Leon and Laura Whitaker for the ALSS (Advanced Life Support System) program. The addition of a system-independent graphical user interface to the crop planning tool will make the application more accessible to a wider range of users and enhance its value as an analysis, design, and planning tool. My presentation will demonstrate the form and functionality of this interface. This graphical user interface allows users to edit system parameters stored in the file system. Data on the interaction of the crew, crops, and waste processing system with the available system resources is organized and labeled. Program output, which is stored in the file system, is also presented to the user in performance-time plots and organized charts. The menu system is designed to guide the user through analysis and decision making tasks, providing some help if necessary. The Java programming language was used to develop this interface in hopes of providing portability and remote operation.

  9. Brain computer interfaces, a review.

    PubMed

    Nicolas-Alonso, Luis Fernando; Gomez-Gil, Jaime

    2012-01-01

    A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological or neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification, and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with artifacts in the control signals and improve performance. Fourth, the review studies some mathematical algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.
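
    The standard pipeline the review describes (acquisition, enhancement, feature extraction, classification, control) can be sketched end to end in a toy form. The signals, band limits, and the two "commands" below are invented for illustration; real BCIs use multi-channel recordings and far richer features and classifiers:

```python
# Toy BCI pipeline sketch: synthetic "acquired" signals -> band-power
# feature extraction -> nearest-centroid classification -> a command label.

import math

def band_power(x, fs, f_lo, f_hi):
    """Feature extraction: summed DFT power of x over bins in [f_lo, f_hi] Hz."""
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n
    return total

def classify(feature, centroids):
    """Nearest-centroid decision: map a scalar feature to a command label."""
    return min(centroids, key=lambda label: abs(centroids[label] - feature))

fs, n = 128, 128
alpha = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]  # 10 Hz: "rest"
beta = [math.sin(2 * math.pi * 22 * t / fs) for t in range(n)]   # 22 Hz: "move"
centroids = {"rest": band_power(alpha, fs, 8, 13),
             "move": band_power(beta, fs, 8, 13)}
print(classify(band_power(alpha, fs, 8, 13), centroids))  # rest
print(classify(band_power(beta, fs, 8, 13), centroids))   # move
```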

  10. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
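
    One simple way to formalize interface adequacy is to check that machine states the interface merges into a single display state are indistinguishable to the user: they must enable the same events, and each event must lead to the same interface state. This is a hedged sketch of that idea, not the tool's actual algorithm; the espresso-machine states are invented:

```python
# Interface-adequacy check sketch: states merged by the abstraction must
# agree on enabled events and on the abstract successor of each event.

def adequate(machine, abstraction):
    """machine: {state: {event: next_state}}; abstraction: state -> ui_state.
    True iff merged states behave identically as seen through the interface."""
    behavior = {}  # ui_state -> {event: ui_successor}
    for state, trans in machine.items():
        ui = abstraction[state]
        view = {e: abstraction[s] for e, s in trans.items()}
        if ui in behavior and behavior[ui] != view:
            return False  # two merged states disagree: interface inadequate
        behavior.setdefault(ui, view)
    return True

machine = {
    "brew_a": {"button": "idle"},
    "brew_b": {"button": "idle"},
    "idle": {"button": "brew_a"},
}
ui_map = {"brew_a": "BREW", "brew_b": "BREW", "idle": "IDLE"}
print(adequate(machine, ui_map))  # True
bad = {**machine, "brew_b": {"button": "brew_a"}}  # now button behaves differently
print(adequate(bad, ui_map))  # False
```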

  11. Overview of Graphical User Interfaces.

    ERIC Educational Resources Information Center

    Hulser, Richard P.

    1993-01-01

    Discussion of graphical user interfaces for online public access catalogs (OPACs) covers the history of OPACs; OPAC front-end design, including examples from Indiana University and the University of Illinois; and planning and implementation of a user interface. (10 references) (EA)

  12. Developing A Web-based User Interface for Semantic Information Retrieval

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Keller, Richard M.

    2003-01-01

    While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.

  13. CLIPS application user interface for the PC

    NASA Technical Reports Server (NTRS)

    Jenkins, Jim; Holbrook, Rebecca; Shewhart, Mark; Crouse, Joey; Yarost, Stuart

    1991-01-01

    The majority of applications that utilize expert system development programs for their knowledge representation and inferencing capability require some form of interface with the end user. This interface is more than likely an interaction through the computer screen. When building an application, the user interface can prove to be the most difficult and time-consuming aspect to program. Commercial products currently exist that address this issue. To keep pace, the C Language Integrated Production System (CLIPS) will need a solution for its lack of an easy-to-use Application User Interface (AUI). This paper presents a survey of the DoD CLIPS user community and provides the backbone of a possible solution.

  14. A user interface development tool for space science systems Transportable Applications Environment (TAE) Plus

    NASA Technical Reports Server (NTRS)

    Szczur, Martha R.

    1990-01-01

    The Transportable Applications Environment Plus (TAE PLUS), developed at NASA's Goddard Space Flight Center, is a portable What You See Is What You Get (WYSIWYG) user interface development and management system. Its primary objective is to provide an integrated software environment that allows interactive prototyping and development of user interfaces, as well as management of the user interface within the operational domain. Although TAE Plus is applicable to many types of applications, its focus is supporting user interfaces for space applications. This paper discusses what TAE Plus provides and how the implementation has utilized state-of-the-art technologies within graphic workstations, windowing systems and object-oriented programming languages.

  15. Design and development of data glove based on printed polymeric sensors and Zigbee networks for Human-Computer Interface.

    PubMed

    Tongrod, Nattapong; Lokavee, Shongpun; Watthanawisuth, Natthapol; Tuantranont, Adisorn; Kerdcharoen, Teerakiat

    2013-03-01

    Current trends in Human-Computer Interface (HCI) have brought on a wave of new consumer devices that can track the motion of our hands. These devices have enabled more natural interfaces with computer applications. Data gloves are commonly used as input devices, equipped with sensors that detect the movements of hands and a communication unit that interfaces those movements with a computer. Unfortunately, the high cost of sensor technology inevitably puts some burden on most general users. In this research, we have proposed a low-cost data glove concept based on printed polymeric sensors, with pressure and bending sensors fabricated by a consumer ink-jet printer. These sensors were realized using a conductive polymer (poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) [PEDOT:PSS]) thin film printed on glossy photo paper. The performance of these sensors can be enhanced by the addition of dimethyl sulfoxide (DMSO) to the aqueous dispersion of PEDOT:PSS. The concept of surface resistance was successfully adopted for the design and fabrication of the sensors. To demonstrate the printed sensors, we constructed a data glove using them and developed software for real-time hand tracking. Wireless networks based on low-cost Zigbee technology were used to transfer data from the glove to a computer. To our knowledge, this is the first report on a low-cost data glove based on paper pressure sensors. This low-cost implementation of both sensors and communication network, as proposed in this paper, should pave the way toward widespread adoption of data gloves for real-time hand-tracking applications.
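
    Resistive bend or pressure sensors of this kind are typically read through a voltage divider, with the resistance then mapped to a bend angle via calibration. A minimal sketch of that readout path; the ADC resolution, reference voltage, fixed resistor, and calibration points are all invented for illustration:

```python
# Resistive bend-sensor readout sketch: ADC reading -> sensor resistance
# via a voltage divider -> bend angle via two-point linear calibration.

def sensor_resistance(adc_value, adc_max, v_ref, r_fixed):
    """Infer sensor resistance from an ADC reading of the divider tap.
    Divider: v_out = v_ref * r_sensor / (r_fixed + r_sensor)."""
    v_out = v_ref * adc_value / adc_max
    return r_fixed * v_out / (v_ref - v_out)

def bend_angle(resistance, r_flat, r_bent, max_angle=90.0):
    """Linear two-point calibration from flat/bent resistances to degrees,
    clamped to [0, max_angle]."""
    frac = (resistance - r_flat) / (r_bent - r_flat)
    return max(0.0, min(max_angle, frac * max_angle))

# Assumed setup: 10-bit ADC, 3.3 V reference, 10 kohm fixed leg.
# A mid-scale reading means the divider legs are equal.
r = sensor_resistance(512, 1024, 3.3, 10_000)
print(round(r))                        # 10000
print(bend_angle(r, 10_000, 30_000))   # 0.0 (sensor is flat)
```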

  16. OpenDrop: An Integrated Do-It-Yourself Platform for Personal Use of Biochips

    PubMed Central

    Alistar, Mirela; Gaudenz, Urs

    2017-01-01

    Biochips, or digital labs-on-chip, are developed with the purpose of being used by laboratory technicians or biologists in laboratories or clinics. In this article, we expand this vision with the goal of enabling everyone, regardless of their expertise, to use biochips for their own personal purposes. We developed OpenDrop, an integrated electromicrofluidic platform that allows users to develop and program their own bio-applications. We address the main challenges that users may encounter: accessibility, bio-protocol design and interaction with microfluidics. OpenDrop consists of a do-it-yourself biochip, an automated software tool with visual interface and a detailed technique for at-home operations of microfluidics. We report on two years of use of OpenDrop, released as an open-source platform. Our platform attracted a highly diverse user base with participants originating from maker communities, academia and industry. Our findings show that 47% of attempts to replicate OpenDrop were successful, the main challenge remaining the assembly of the device. In terms of usability, the users managed to operate their platforms at home and are working on designing their own bio-applications. Our work provides a step towards a future in which everyone will be able to create microfluidic devices for their personal applications, thereby democratizing parts of health care. PMID:28952524

  17. Solar research with ALMA: Czech node of European ARC as your user-support infrastructure

    NASA Astrophysics Data System (ADS)

    Bárta, M.; Skokić, I.; Brajša, R.; Czech ARC Node Team

    2017-08-01

    ALMA (Atacama Large Millimeter/sub-millimeter Array) is by far the largest project among current ground-based observational facilities in astronomy and astrophysics. It is built and operated in world-wide cooperation (ESO, NRAO, NAOJ) at an altitude of 5000 m in the Atacama desert, Chile. Because of its unprecedented capabilities, ALMA is considered a cutting-edge research device in astrophysics with the potential for many breakthrough discoveries in the next decade and beyond. Although it is not an exclusively solar-research-dedicated instrument, science observations of the Sun are now possible and have recently started in observing Cycle 4 (2016-2017). In order to facilitate access to this top-class, but at the same time very complicated, device for researchers lacking technical expertise, a network of three ALMA Regional Centers (ARCs) has been formed in Europe, North America, and East Asia as a user-support infrastructure and interface between the observatory and the user community. After a short introduction to ALMA, the roles of the ARCs and hints on how to utilize their services will be presented, with emphasis on the specific (and in Europe unique) mission of the Czech ARC node in solar research with ALMA. Finally, peculiarities of solar observations that demanded the development of the specific Solar ALMA Observing Modes will be discussed.

  18. Human/Computer Interfacing in Educational Environments.

    ERIC Educational Resources Information Center

    Sarti, Luigi

    1992-01-01

    This discussion of educational applications of user interfaces covers the benefits of adopting database techniques in organizing multimedia materials; the evolution of user interface technology, including teletype interfaces, analogic overlay graphics, window interfaces, and adaptive systems; application design problems, including the…

  19. Developing the Multimedia User Interface Component (MUSIC) for the Icarus Presentation System (IPS)

    DTIC Science & Technology

    1993-12-01

    AD-A276 341. In-house report, December 1993; period covered June-August 1993. This report documents the initial research, design, and implementation of a prototype of the Multimedia User Interface Component (MUSIC) for the Icarus Presentation System (IPS).

  20. A methodology for the design and evaluation of user interfaces for interactive information systems. Ph.D. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Farooq, Mohammad U.

    1986-01-01

    The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
