AQBE — QBE Style Queries for Archetyped Data
NASA Astrophysics Data System (ADS)
Sachdeva, Shelly; Yaginuma, Daigo; Chu, Wanming; Bhalla, Subhash
Large-scale adoption of electronic healthcare applications requires semantic interoperability. Recent proposals describe an advanced (multi-level) DBMS architecture for repository services for patient health records. Such architectures also require query interfaces at multiple levels, including one suited to semi-skilled users. In this regard, this study examines a high-level user interface for querying the new form of standardized Electronic Health Records system. It proposes a step-by-step graphical query interface that allows semi-skilled users to write queries, with the aim of decreasing user effort and communication ambiguities while increasing user friendliness.
Matsubara, Takamitsu; Morimoto, Jun
2013-08-01
In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose for EMG signals a bilinear model that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.
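For readers who want a concrete picture of the bilinear idea, the following sketch fits a generic bilinear model with alternating least squares on synthetic feature vectors: user-dependent mixing matrices W_u and shared motion-dependent factors h_m, with a novel user calibrated from a few labeled samples. The shapes, the fitting procedure, and all variable names are illustrative assumptions and not necessarily the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_motions, d, k = 5, 5, 8, 3          # d EMG features, k latent motion dimensions

# Synthetic training data y[u, m] = W_u @ h_m + noise (a stand-in for real EMG features)
W_true = rng.normal(size=(n_users, d, k))
H_true = rng.normal(size=(n_motions, k))
Y = np.einsum('udk,mk->umd', W_true, H_true) + 0.01 * rng.normal(size=(n_users, n_motions, d))

# Alternating least squares: split Y into user-dependent W and motion-dependent H
W = np.zeros((n_users, d, k))
H = rng.normal(size=(n_motions, k))
for _ in range(100):
    for u in range(n_users):                   # refit each user's mixing matrix
        W[u] = np.linalg.lstsq(H, Y[u], rcond=None)[0].T
    A = W.reshape(n_users * d, k)              # stack all users' mixing matrices
    for m in range(n_motions):                 # refit each motion's shared (user-independent) factor
        H[m] = np.linalg.lstsq(A, Y[:, m].reshape(-1), rcond=None)[0]

# Novel user: estimate the user-dependent factor from a few calibration motions only
calib_motions = [0, 1, 2]
W_new_true = rng.normal(size=(d, k))
Y_cal = np.array([W_new_true @ H_true[m] for m in calib_motions])
W_new = np.linalg.lstsq(H[calib_motions], Y_cal, rcond=None)[0].T

# User-independent feature of an unseen sample from the novel user (motion 4)
h_hat = np.linalg.pinv(W_new) @ (W_new_true @ H_true[4])
print(np.round(h_hat, 2), np.round(H[4], 2))   # should be close, up to fit error
```

A motion classifier trained on the fitted factors h_m could then be applied directly to h_hat, which is the sense in which the extracted features are user independent.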
Representing Graphical User Interfaces with Sound: A Review of Approaches
ERIC Educational Resources Information Center
Ratanasit, Dan; Moore, Melody M.
2005-01-01
The inability of computer users who are visually impaired to access graphical user interfaces (GUIs) has led researchers to propose approaches for adapting GUIs to auditory interfaces, with the goal of providing access for visually impaired people. This article outlines the issues involved in nonvisual access to graphical user interfaces, reviews…
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Farooq, Mohammad U.
1986-01-01
The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.
Supporting openEHR Java desktop application developers.
Kashfi, Hajar; Torgersson, Olof
2011-01-01
The openEHR community suggests that an appropriate approach for creating a graphical user interface for an openEHR-based application is to generate forms from the underlying archetypes and templates. However, current generation techniques are not mature enough to be able to produce high quality interfaces with good usability. Therefore, developing efficient ways to connect manually designed and developed interfaces to openEHR backends is an interesting alternative. In this study, a framework for binding a pre-designed graphical user interface to an openEHR-based backend is proposed. The proposed framework contributes to the set of options available for developers. In particular, we believe that the approach of combining user interface components with an openEHR backend in the proposed way might be useful in situations where the quality of the user interface is essential and for creating small scale and experimental systems.
Study on user interface of pathology picture archiving and communication system.
Kim, Dasueran; Kang, Peter; Yun, Jungmin; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom
2014-01-01
It is necessary to improve the pathology workflow. A workflow task analysis was performed using a pathology picture archiving and communication system (pathology PACS) in order to propose a user interface for the Pathology PACS considering user experience. An interface analysis of the Pathology PACS in Seoul National University Hospital and a task analysis of the pathology workflow were performed by observing recorded video. Based on the obtained results, a user interface for the Pathology PACS was proposed. Hierarchical task analysis of the Pathology PACS was classified into 17 tasks, including 1) pre-operation, 2) text, 3) images, 4) medical record viewer, 5) screen transition, 6) pathology identification number input, 7) admission date input, 8) diagnosis doctor, 9) diagnosis code, 10) diagnosis, 11) pathology identification number check box, 12) presence or absence of images, 13) search, 14) clear, 15) Excel save, 16) search results, and 17) re-search. Frequently used menu items were also identified and schematized. A user interface for the Pathology PACS considering user experience could be proposed as a preliminary step, and this study may contribute to the development of medical information systems based on user experience and usability.
NASA Technical Reports Server (NTRS)
Moe, Karen L.; Perkins, Dorothy C.; Szczur, Martha R.
1987-01-01
The user support environment (USE), which is a set of software tools for a flexible, standard interactive user interface to the Space Station systems, platforms, and payloads, is described in detail. Included in the USE concept are a user interface language, a run time environment and user interface management system, support tools, and standards for human interaction methods. The goals and challenges of the USE are discussed as well as a methodology based on prototype demonstrations for involving users in the process of validating the USE concepts. By prototyping the key concepts and salient features of the proposed user interface standards, the user's ability to respond is greatly enhanced.
Emotion scents: a method of representing user emotions on GUI widgets
NASA Astrophysics Data System (ADS)
Cernea, Daniel; Weber, Christopher; Ebert, Achim; Kerren, Andreas
2013-01-01
The world of desktop interfaces has been dominated for years by the concept of windows and standardized user interface (UI) components. Still, while supporting the interaction and information exchange between the users and the computer system, graphical user interface (GUI) widgets are rather one-sided, neglecting to capture the subjective facets of the user experience. In this paper, we propose a set of design guidelines for visualizing user emotions on standard GUI widgets (e.g., buttons, check boxes, etc.) in order to enrich the interface with a new dimension of subjective information by adding support for emotion awareness as well as post-task analysis and decision making. We highlight the use of an EEG headset for recording the various emotional states of the user while he/she is interacting with the widgets of the interface. We propose a visualization approach, called emotion scents, that allows users to view emotional reactions corresponding to different GUI widgets without influencing the layout or changing the positioning of these widgets. Our approach does not focus on highlighting the emotional experience during the interaction with an entire system, but on representing the emotional perceptions and reactions generated by the interaction with a particular UI component. Our research is motivated by enabling emotional self-awareness and subjectivity analysis through the proposed emotion-enhanced UI components for desktop interfaces. These assumptions are further supported by an evaluation of emotion scents.
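As a rough illustration of the general idea, the hypothetical sketch below annotates standard widgets with a thin color strip encoding an emotion estimate without moving or resizing the widgets. The per-widget valence scores and the red-to-green color mapping are assumptions for illustration; they are not the paper's design guidelines or EEG processing pipeline.

```python
import tkinter as tk

def valence_to_color(v):
    # Map valence in [-1, 1] to a red..green strip color (assumed encoding, not the paper's palette)
    r = int(255 * (1 - (v + 1) / 2))
    g = int(255 * ((v + 1) / 2))
    return f'#{r:02x}{g:02x}40'

root = tk.Tk()
emotions = {'Save': 0.7, 'Delete': -0.4}   # hypothetical per-widget valence scores from EEG
for name, v in emotions.items():
    row = tk.Frame(root)
    # Thin "scent" strip drawn beside the button, leaving its size and position unchanged
    tk.Frame(row, bg=valence_to_color(v), width=6, height=24).pack(side='left', fill='y')
    tk.Button(row, text=name).pack(side='left')
    row.pack(anchor='w', padx=8, pady=4)
root.mainloop()
```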
A parallel coordinates style interface for exploratory volume visualization.
Tory, Melanie; Potts, Simeon; Möller, Torsten
2005-01-01
We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.
Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.
Lopes, Ana C; Nunes, Urbano; Vaz, Luís
2010-01-01
This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). The system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain-Computer Interfaces (BCI), which provide a small set of commands issued sparsely, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) used to train users' ability to steer a powered wheelchair appropriately, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to combine the information provided by the local motion-planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.
Integrated Model for E-Learning Acceptance
NASA Astrophysics Data System (ADS)
Ramadiani; Rodziah, A.; Hasan, S. M.; Rusli, A.; Noraini, C.
2016-01-01
E-learning is not going to work if the system is not used in accordance with user needs. The user interface is very important for encouraging use of the application. Many theories have discussed user interface usability evaluation and technology acceptance separately, which raises the question of why the two are not correlated to enhance the e-learning process. Therefore, an evaluation model for e-learning interface acceptance is considered important to investigate. The aim of this study is to propose an integrated e-learning user interface acceptance evaluation model. This model combines several theories of e-learning interface measurement, such as user learning style, usability evaluation, and user benefit. These were formulated into a questionnaire administered to 125 English Language School (ELS) students. The statistical analysis used structural equation modeling with LISREL v8.80 and MANOVA.
A case study on better iconographic design in electronic medical records' user interface.
Tasa, Umut Burcu; Ozcan, Oguzhan; Yantac, Asim Evren; Unluer, Ayca
2008-06-01
It is a known fact that there is a conflict between what users expect and what user interface designers create in the field of medical informatics, as in other fields of interface design. The objective of the study is to suggest, from the 'design art' perspective, a method for improving the usability of an electronic medical record (EMR) interface. The suggestion is based on the hypothesis that the user interface of an EMR should be iconographic. The proposed three-step method consists of a questionnaire survey on how hospital users perceive the concepts/terms that are going to be used in the EMR user interface. Icons associated with the terms are then designed by a designer, following a guideline prepared according to the results of the first questionnaire. Finally, the icons are presented back to the target group for verification. A case study was conducted with 64 medical staff and 30 professional designers for the first questionnaire, and with 30 medical staff for the second. In the second questionnaire, an average of 7.53 out of 10 icons were matched correctly, with a standard deviation of 0.98. Also, all icons except three were matched correctly in at least 83.3% of the forms. The proposed method differs from the majority of previous studies, which are based on user requirements, by relying on user experiments instead. The study demonstrated that the user interface of EMRs should be designed according to a guideline that results from a survey of users' experiences of metaphoric perception of the terms.
ERIC Educational Resources Information Center
Cho, Vincent; Cheng, T. C. Edwin; Lai, W. M. Jennifer
2009-01-01
While past studies on user-interface design focused on a particular system or application using the experimental approach, we propose a theoretical model to assess the impact of perceived user-interface design (PUID) on continued usage intention (CUI) of self-paced e-learning tools in general. We argue that the impact of PUID is mediated by two…
Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems.
Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika
2017-06-01
This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidence, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to be able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for individual users. The proposed methods can be easily integrated into devising more advanced SC schemes and/or strategies for automatic BCI self-adaptations.
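A minimal sketch of the kind of recursive Bayesian goal-recognition update described above is given below, assuming a toy observation model in which a command (or gaze) direction is more likely the better it aligns with the direction toward a goal. The goal positions, the exponential likelihood, and the parameter beta are illustrative assumptions, not the paper's user-agnostic model.

```python
import numpy as np

# Hypothetical 2D goals; the belief over goals starts uniform and is updated recursively.
goals = np.array([[4.0, 0.0], [0.0, 4.0], [-4.0, 0.0]])
belief = np.full(len(goals), 1.0 / len(goals))

def likelihood(evidence_dir, robot_pos, goal, beta=3.0):
    # Observation model (assumed): evidence direction aligned with the goal is more probable
    to_goal = goal - robot_pos
    to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    return np.exp(beta * float(np.dot(evidence_dir, to_goal)))

def update(belief, evidence_dir, robot_pos):
    # Recursive Bayesian update: posterior proportional to prior times likelihood
    post = belief * np.array([likelihood(evidence_dir, robot_pos, g) for g in goals])
    return post / post.sum()

robot = np.array([0.0, 0.0])
for e in [np.array([1.0, 0.0]), np.array([0.9, 0.1])]:   # two "go right" pieces of evidence
    e = e / np.linalg.norm(e)
    belief = update(belief, e, robot)
print(belief)   # probability mass should concentrate on the goal at (4, 0)
```

A shared controller could then, for example, blend its assistance toward the goal with the highest posterior once the belief exceeds a confidence threshold.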
Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems
NASA Astrophysics Data System (ADS)
Abu-Alqumsan, Mohammad; Ebert, Felix; Peer, Angelika
2017-06-01
Objective. This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidence, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown to be able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Significance. Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for individual users. The proposed methods can be easily integrated into devising more advanced SC schemes and/or strategies for automatic BCI self-adaptations.
Heymann, Michael; Degani, Asaf
2007-04-01
We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.
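The correctness criterion can be illustrated with a toy check (not the authors' full procedure): group the machine's concrete states into display modes and verify that every user event maps each mode to a single next mode, so the abstracted interface stays predictable while remaining succinct. The example machine and mode labels below are invented for illustration.

```python
# Toy machine: concrete state -> {user event: next concrete state} (hypothetical example)
machine = {
    'off':       {'power': 'idle'},
    'idle':      {'power': 'off', 'start': 'heat_low'},
    'heat_low':  {'start': 'heat_high', 'power': 'off'},
    'heat_high': {'start': 'heat_low', 'power': 'off'},
}
# Succinct interface abstraction: several concrete states share one display mode
display = {'off': 'OFF', 'idle': 'ON', 'heat_low': 'HEATING', 'heat_high': 'HEATING'}

def is_predictable(machine, display):
    """Weak correctness check: each (mode, event) pair must lead to exactly one next mode."""
    seen = {}
    for state, transitions in machine.items():
        for event, nxt in transitions.items():
            key = (display[state], event)
            if key in seen and seen[key] != display[nxt]:
                return False, key          # user could not predict the next display mode
            seen[key] = display[nxt]
    return True, None

print(is_predictable(machine, display))
# 'start' sends HEATING to HEATING regardless of the hidden low/high state,
# so this two-mode abstraction stays predictable despite hiding detail.
```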
Intelligent user interface concept for space station
NASA Technical Reports Server (NTRS)
Comer, Edward; Donaldson, Cameron; Bailey, Elizabeth; Gilroy, Kathleen
1986-01-01
The space station computing system must interface with a wide variety of users, from highly skilled operations personnel to payload specialists from all over the world. The interface must accommodate a wide variety of operations from the space platform, ground control centers and from remote sites. As a result, there is a need for a robust, highly configurable and portable user interface that can accommodate the various space station missions. The concept of an intelligent user interface executive, written in Ada, that would support a number of advanced human interaction techniques, such as windowing, icons, color graphics, animation, and natural language processing is presented. The user interface would provide intelligent interaction by understanding the various user roles, the operations and mission, the current state of the environment and the current working context of the users. In addition, the intelligent user interface executive must be supported by a set of tools that would allow the executive to be easily configured and to allow rapid prototyping of proposed user dialogs. This capability would allow human engineering specialists acting in the role of dialog authors to define and validate various user scenarios. The set of tools required to support development of this intelligent human interface capability is discussed and the prototyping and validation efforts required for development of the Space Station's user interface are outlined.
On Abstractions and Simplifications in the Design of Human-Automation Interfaces
NASA Technical Reports Server (NTRS)
Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)
2001-01-01
This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user must be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.
On Abstractions and Simplifications in the Design of Human-Automation Interfaces
NASA Technical Reports Server (NTRS)
Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)
2002-01-01
This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.
User interfaces in space science instrumentation
NASA Astrophysics Data System (ADS)
McCalden, Alec John
This thesis examines user interaction with instrumentation in the specific context of space science. It gathers together existing practice in machine interfaces with a look at potential future usage and recommends a new approach to space science projects with the intention of maximising their science return. It first takes a historical perspective on user interfaces and ways of defining and measuring the science return of a space instrument. Choices of research methodology are considered. Implementation details such as the concepts of usability, mental models, affordance and presentation of information are described, and examples of existing interfaces in space science are given. A set of parameters for use in analysing and synthesizing a user interface is derived by using a set of case studies of diverse failures and from previous work. A general space science user analysis is made by looking at typical practice, and an interview plus persona technique is used to group users with interface designs. An examination is made of designs in the field of astronomical instrumentation interfaces, showing the evolution of current concepts and including ideas capable of sustaining progress in the future. The parameters developed earlier are then tested against several established interfaces in the space science context to give a degree of confidence in their use. The concept of a simulator that is used to guide the development of an instrument over the whole lifecycle is described, and the idea is proposed that better instrumentation would result from more efficient use of the resources available. The previous ideas in this thesis are then brought together to describe a proposed new approach to a typical development programme, with an emphasis on user interaction. The conclusion shows that there is significant room for improvement in the science return from space instrumentation by attention to the user interface.
Representation-based user interfaces for the audiovisual library of the year 2000
NASA Astrophysics Data System (ADS)
Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique
1995-03-01
The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues that will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of some audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators of existing content, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the document contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of a program aiming to develop, for image and sound documents, an experimental counterpart to the library's digitized text reading workstation.
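One standard technique for the kind of automatic structural extraction mentioned above is shot-boundary detection by comparing gray-level histograms of consecutive frames; the sketch below shows this on synthetic frames. The 32-bin histogram and the 0.5 threshold are illustrative assumptions, not the methods proposed in the communication.

```python
import numpy as np

def shot_boundaries(frames, n_bins=32, threshold=0.5):
    """Flag likely cuts by comparing gray-level histograms of consecutive frames.
    `frames` is an iterable of 2-D uint8 arrays; the threshold is data dependent."""
    prev_hist = None
    cuts = []
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=n_bins, range=(0, 256))
        hist = hist / hist.sum()                       # normalize to a probability vector
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(i)                             # large histogram jump suggests a cut
        prev_hist = hist
    return cuts

# Synthetic clip: 20 dark frames followed by 20 bright frames -> one cut at index 20
clip = [np.full((48, 64), 40, np.uint8)] * 20 + [np.full((48, 64), 200, np.uint8)] * 20
print(shot_boundaries(clip))   # expected: [20]
```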
An EMG-based robot control scheme robust to time-varying EMG signal features.
Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J
2010-05-01
Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots in everyday life, especially in providing services to people with special needs (i.e., elderly, people with impairments, or people with disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, making the user's upper limb free of bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control in real time an anthropomorphic robot arm in 3-D space, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes with respect to time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in the 3-D space with variable hand speed profiles.
Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments
NASA Astrophysics Data System (ADS)
Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin
Developments in computer technology over the last decade have changed the ways computers are used. Emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially their user interfaces, a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting the applications even to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.
Tang, Muh-Chyun; Liu, Ying-Hsang; Wu, Wan-Ching
2013-09-01
Previous research has shown that information seekers in the biomedical domain need more support in formulating their queries. A user study was conducted to evaluate the effectiveness of a metadata-based query suggestion interface for PubMed bibliographic search. The study also investigated the impact of search task familiarity on search behaviors and the effectiveness of the interface. A real-user, real-search-request, and real-system approach was used for the study. Unlike traditional IR evaluation, where assigned tasks are used, the participants were asked to search requests of their own. Forty-four researchers in Health Sciences participated in the evaluation - each conducted two research requests of their own, alternately with the proposed interface and the PubMed baseline. Several performance criteria were measured to assess the potential benefits of the experimental interface, including users' assessment of their original and eventual queries, the perceived usefulness of the interfaces, satisfaction with the search results, and the average relevance score of the saved records. The results show that, when searching for an unfamiliar topic, users were more likely to change their queries, indicating the effect of familiarity on search behaviors. The results also show that the interface scored higher on several of the performance criteria, such as the "goodness" of the queries, perceived usefulness, and user satisfaction. Furthermore, in line with our hypothesis, the proposed interface was relatively more effective when less familiar search requests were attempted. Results indicate that there is a selective compatibility between search familiarity and search interface. One implication of the research for system evaluation is the importance of taking task familiarity into consideration when assessing the effectiveness of interactive IR systems.
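The flavor of a metadata-based query suggestion interface can be sketched with a toy controlled vocabulary: free-text query terms are matched to metadata entries, and accepted suggestions are combined with the original query. The vocabulary, field tags, and function names below are hypothetical stand-ins, not the evaluated interface or PubMed's actual thesaurus.

```python
# Toy free-text-to-metadata mapping (hypothetical; a real system would consult a thesaurus such as MeSH)
VOCAB = {
    'heart attack': 'Myocardial Infarction[MeSH]',
    'high blood pressure': 'Hypertension[MeSH]',
    'aspirin': 'Aspirin[MeSH]',
}

def suggest(query: str):
    """Return metadata terms whose trigger phrases appear in the free-text query."""
    q = query.lower()
    return [term for phrase, term in VOCAB.items() if phrase in q]

def expand(query: str, accepted: list[str]) -> str:
    """Combine the original query with the accepted metadata terms (AND semantics assumed)."""
    return ' AND '.join([f'({query})'] + accepted)

suggestions = suggest('aspirin after heart attack')
print(suggestions)
print(expand('aspirin after heart attack', suggestions))
```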
A method of designing smartphone interface based on the extended user's mental model
NASA Astrophysics Data System (ADS)
Zhao, Wei; Li, Fengmin; Bian, Jiali; Pan, Juchen; Song, Song
2017-01-01
The user's mental model is the core guiding theory of product design, especially for practical products. A practical product is essentially a tool that users employ to meet their needs, and the most important feature of a tool is its usability. Design methods based on the user's mental model provide practical and feasible theoretical guidance for improving the usability of a product according to the user's understanding of things. In this paper, we propose a method for designing smartphone interfaces based on an extended user's mental model, informed by further research on user groups. This approach achieves personalized customization of the smartphone application interface and enhances the efficiency of application use.
A study of usability principles and interface design for mobile e-books.
Wang, Chao-Ming; Huang, Ching-Hua
2015-01-01
This study examined usability principles and interface designs in order to understand the relationship between the intentions of mobile e-book interface designs and users' perceptions. First, this study summarised 4 usability principles and 16 interface attributes, in order to conduct usability testing and questionnaire survey by referring to Nielsen (1993), Norman (2002), and Yeh (2010), who proposed the usability principles. Second, this study used the interviews to explore the perceptions and behaviours of user operations through senior users of multi-touch prototype devices. The results of this study are as follows: (1) users' behaviour of operating an interactive interface is related to user prior experience; (2) users' rating of the visibility principle is related to users' subjective perception but not related to user prior experience; however, users' ratings of the ease, efficiency, and enjoyment principles are related to user prior experience; (3) the interview survey reveals that the key attributes affecting users' behaviour of operating an interface include aesthetics, achievement, and friendliness. This study conducts experiments to explore the effects of users’ prior multi-touch experience on users’ behaviour of operating a mobile e-book interface and users’ rating of usability principles. Both qualitative and quantitative data analyses were performed. By applying protocol analysis, key attributes affecting users’ behaviour of operation were determined.
Tsai, Tsai-Hsuan; Chang, Hsien-Tsung; Chen, Yan-Jiun; Chang, Yung-Sheng
2017-01-01
The use of the Internet and social applications has many benefits for the elderly, but numerous investigations have shown that the elderly do not perceive online social networks as a friendly social environment. Therefore, TreeIt, a social application specifically designed for the elderly, was developed for this study. In the TreeIt application, seven mechanisms promoting social interaction were designed to allow older adults to use social networking sites (SNSs) to increase social connection, maintain the intensity of social connections and strengthen social experience. This study's main objective was to investigate how user interface design affects older people's intention and attitude related to using SNSs. Fourteen user interface evaluation heuristics proposed by Zhang et al. were adopted as the criteria to assess user interface usability and further grouped into three categories: system support, user interface design and navigation. The technology acceptance model was adopted to assess older people's intention and attitude related to using SNSs. One hundred and one elderly persons were enrolled in this study as subjects, and the results showed that all of the hypotheses proposed in this study were valid: system support and perceived usefulness had a significant effect on behavioral intention; user interface design and perceived ease of use were positively correlated with perceived usefulness; and navigation exerted an influence on perceived ease of use. The results of this study are valuable for the future development of social applications for the elderly.
CLIPS: A proposal for improved usability
NASA Technical Reports Server (NTRS)
Patton, Charles R.
1990-01-01
This paper proposes the enhancement of the CLIPS user interface to improve the over-all usability of the CLIPS development environment. It suggests some directions for the long term growth of the user interface, and discusses some specific strengths and weaknesses of the current CLIPS PC user interface. Every user of CLIPS shares a common experience: his/her first interaction with the system itself. As with any new language, between the process of installing CLIPS on the appropriate computer and the completion of a large application, an intensive learning process takes place. For those with extensive programming knowledge and LISP backgrounds, this experience may have been mostly interesting and pleasant. Being familiar with products that are similar to CLIPS in many ways, these users enjoy a relatively short training period with the product. Already familiar with many of the functions they wish to employ, experienced users are free to focus on the capabilities of CLIPS that make it uniquely useful within their working environment.
A hybrid brain-computer interface-based mail client.
Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Li, Feng
2013-01-01
Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.
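A minimal sketch of the hybrid-control idea, under simplifying assumptions, is shown below: a motor-imagery score moves a cursor across mail-client controls while a P300 score triggers a selection. The synthetic score generators, the threshold, and the button names are illustrative assumptions, not the authors' classifiers or GUI.

```python
import numpy as np

rng = np.random.default_rng(1)

def mi_score():
    # Stand-in for a motor-imagery classifier: >0 means "right hand", <0 means "left hand"
    return rng.normal(0.3, 1.0)

def p300_score():
    # Stand-in for a P300 detector: higher when the attended control flashes
    return rng.normal(0.0, 1.0)

cursor, buttons = 0, ['Inbox', 'Reply', 'Attach', 'Send']
for step in range(30):
    # Motor imagery moves the BCI "mouse" one control left or right
    cursor = int(np.clip(cursor + (1 if mi_score() > 0 else -1), 0, len(buttons) - 1))
    # A strong P300 response "clicks" the currently hovered control
    if p300_score() > 2.0:                       # conservative threshold to limit false clicks
        print(f'step {step}: click {buttons[cursor]}')
```

In the actual system, text entry would additionally be handled by an adaptive P300 speller rather than this toy selection loop.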
A Hybrid Brain-Computer Interface-Based Mail Client
Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Li, Feng
2013-01-01
Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method. PMID:23690880
Improving 3D Character Posing with a Gestural Interface.
Kyto, Mikko; Dhinakaran, Krupakar; Martikainen, Aki; Hamalainen, Perttu
2017-01-01
The most time-consuming part of character animation is 3D character posing. Posing using a mouse is a slow and tedious task that involves sequences of selecting on-screen control handles and manipulating the handles to adjust character parameters, such as joint rotations and end effector positions. Thus, various 3D user interfaces have been proposed to make animating easier, but they typically provide less accuracy. The proposed interface combines a mouse with the Leap Motion device to provide 3D input. A usability study showed that users preferred the Leap Motion over a mouse as a 3D gestural input device. The Leap Motion drastically decreased the number of required operations and the task completion time, especially for novice users.
A Proposed Intelligent Policy-Based Interface for a Mobile eHealth Environment
NASA Astrophysics Data System (ADS)
Tavasoli, Amir; Archer, Norm
Users of mobile eHealth systems are often novices, and the learning process for them may be very time consuming. In order for systems to be attractive to potential adopters, it is important that the interface should be very convenient and easy to learn. However, the community of potential users of a mobile eHealth system may be quite varied in their requirements, so the system must be able to adapt easily to suit user preferences. One way to accomplish this is to have the interface driven by intelligent policies. These policies can be refined gradually, using inputs from potential users, through intelligent agents. This paper develops a framework for policy refinement for eHealth mobile interfaces, based on dynamic learning from user interactions.
Chu, Chia-Hui; Kuo, Ming-Chuan; Weng, Shu-Hui; Lee, Ting-Ting
2016-01-01
A user-friendly interface can enhance the efficiency of data entry, which is crucial for building a complete database. In this study, two user interfaces (traditional pull-down menu vs. check boxes) are proposed and evaluated based on medical records with fever medication orders by measuring the time for data entry, the steps for each data entry record, and the completeness rate of each medical record. The results revealed that the time for data entry was reduced from 22.8 sec/record to 3.2 sec/record. The data entry procedure was also reduced from 9 steps in the traditional interface to 3 steps in the new one. In addition, the completeness of medical records increased from 20.2% to 98%. All these results indicate that the new user interface provides a more user-friendly and efficient approach to data entry than the traditional interface.
NASA Astrophysics Data System (ADS)
Osada, Masakazu; Kaise, Mitsuru; Ozeki, Takeshi; Tsunakawa, Hirofumi; Tsunakawa, Kiyoshi; Takayanagi, Takashi; Suzuki, Nobuaki; Miwa, Jun; Ohta, Yasuhiko; Kanai, Koichi
1999-07-01
We have proposed a new user interface with workflow customization, implemented and evaluated in the Endoscopy Department Mini-PACS that has been introduced and routinely used for two years at Toshiba General Hospital. We set tasks at the endoscopy image acquisition units during examinations for two different types of user interfaces and compared their performance. One is command-button based operation using a remote control; the other uses eight graphic buttons that are displayed on a CRT monitor and can be customized. Results of the two-year study show that the mean number of input diagnosis codes per examination with the graphical, customizable interface is significantly greater than with the conventional interface. Also, the mean time to complete one upper gastric endoscopy examination with the new user interface is about 17 percent less than with the conventional interface. These results suggest that systems with visual, customizable operation and feedback encourage physicians to use more functions and to complete tasks more efficiently than systems with conventional command-button based user interfaces.
Chang, Hsien-Tsung; Chen, Yan-Jiun; Chang, Yung-Sheng
2017-01-01
The use of the Internet and social applications has many benefits for the elderly, but numerous investigations have shown that the elderly do not perceive online social networks as a friendly social environment. Therefore, TreeIt, a social application specifically designed for the elderly, was developed for this study. In the TreeIt application, seven mechanisms promoting social interaction were designed to allow older adults to use social networking sites (SNSs) to increase social connection, maintain the intensity of social connections and strengthen social experience. This study’s main objective was to investigate how user interface design affects older people’s intention and attitude related to using SNSs. Fourteen user interface evaluation heuristics proposed by Zhang et al. were adopted as the criteria to assess user interface usability and further grouped into three categories: system support, user interface design and navigation. The technology acceptance model was adopted to assess older people’s intention and attitude related to using SNSs. One hundred and one elderly persons were enrolled in this study as subjects, and the results showed that all of the hypotheses proposed in this study were valid: system support and perceived usefulness had a significant effect on behavioral intention; user interface design and perceived ease of use were positively correlated with perceived usefulness; and navigation exerted an influence on perceived ease of use. The results of this study are valuable for the future development of social applications for the elderly. PMID:28837566
Design and validation of an improved graphical user interface with the 'Tool ball'.
Lee, Kuo-Wei; Lee, Ying-Chu
2012-01-01
The purpose of this research is to introduce the design of an improved graphical user interface (GUI) and to verify the operational efficiency of the proposed interface. Until now, clicking the toolbar with the mouse has been the usual way to operate software functions. In our research, we designed an improved graphical user interface - a tool ball that is operated by a mouse wheel to perform software functions. Several experiments were conducted to measure the time needed to operate certain software functions with the traditional combination of "mouse click + tool button" and the proposed integration of "mouse wheel + tool ball". The results indicate that the tool ball design can accelerate the operation of software functions, decrease the number of icons on the screen, and broaden the applications of the mouse wheel.
A Method for Automated Detection of Usability Problems from Client User Interface Events
Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.
2005-01-01
Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
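A highly simplified sketch of detecting a usability problem from logged interface events is given below: events are grouped by subgoal, durations are compared against expert time budgets, and a think-aloud correction factor inflates the budgets. The event schema, budgets, and the 1.4 correction factor are illustrative assumptions, not the paper's algorithm or its empirically derived value.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # seconds since session start
    subgoal: str    # higher-order subgoal this UI event belongs to
    action: str     # raw interface action (click, focus, submit, ...)

THINK_ALOUD_CORRECTION = 1.4                               # assumed slowdown factor
BUDGET = {'select_case': 10.0, 'enter_diagnosis': 30.0}    # hypothetical expert time budgets (s)

def detect_problems(events):
    """Flag subgoals whose observed duration exceeds the corrected expert budget."""
    spans = {}
    for e in events:
        spans.setdefault(e.subgoal, [e.t, e.t])
        spans[e.subgoal][1] = e.t
    problems = []
    for subgoal, (start, end) in spans.items():
        allowed = BUDGET.get(subgoal, float('inf')) * THINK_ALOUD_CORRECTION
        if end - start > allowed:
            problems.append((subgoal, round(end - start, 1), allowed))
    return problems

log = [Event(0, 'select_case', 'click'), Event(9, 'select_case', 'click'),
       Event(10, 'enter_diagnosis', 'focus'), Event(70, 'enter_diagnosis', 'submit')]
print(detect_problems(log))   # enter_diagnosis took 60 s > 42 s allowed -> flagged
```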
Vision based interface system for hands free control of an Intelligent Wheelchair.
Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi
2009-08-06
Due to the shift in the age structure of today's populations, the need to develop devices and technologies to support these populations has been increasing. Traditionally, the wheelchair, whether powered or manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it remains highly restricted, especially for the severely disabled. As a solution, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities in their limbs. This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To facilitate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information, where the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shapes of the user's mouth. Our system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infra-red sensors, a PC camera, and a vision system. The vision system that analyzes the user's gestures operates in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to a cluttered background and time-varying illumination. To prove these advantages, the proposed system was tested with 34 users in indoor and outdoor environments and the results were compared with those of other systems; the results showed that the proposed system has superior performance in terms of speed and accuracy. Therefore, the proposed system is shown to provide a friendly and convenient interface for severely disabled people.
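The recognizer and converter stages can be sketched, under assumed feature definitions, as a K-means mouth-shape classifier plus a threshold on face inclination that together yield a wheelchair command. The synthetic width/height features, the 15-degree thresholds, and the command names are illustrative assumptions, not the system's edge-based features or trained models.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic mouth-box features (width, height): closed mouths are wide and flat, open mouths rounder
closed = rng.normal([3.0, 1.0], 0.1, size=(50, 2))
opened = rng.normal([2.0, 2.0], 0.1, size=(50, 2))
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.vstack([closed, opened]))
open_cluster = km.predict([[2.0, 2.0]])[0]          # which cluster corresponds to "open"

def command(face_inclination_deg: float, mouth_box: tuple) -> str:
    """Convert recognized face inclination and mouth shape into a wheelchair command."""
    if km.predict([list(mouth_box)])[0] != open_cluster:
        return 'stop'                               # closed mouth -> stop
    if face_inclination_deg < -15:
        return 'turn left'
    if face_inclination_deg > 15:
        return 'turn right'
    return 'go forward'

print(command(0.0, (2.0, 2.1)))     # open mouth, level face  -> go forward
print(command(-25.0, (2.0, 2.0)))   # open mouth, tilted left -> turn left
print(command(0.0, (3.1, 0.9)))     # closed mouth            -> stop
```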
iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones
NASA Astrophysics Data System (ADS)
Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il
2013-02-01
The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and reactions to hand and finger movements are achieved through hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual core processor and 1 GB RAM.
Intelligent Context-Aware and Adaptive Interface for Mobile LBS
Liu, Yanhong
2015-01-01
Context-aware user interfaces play an important role in many human-computer interaction tasks of location-based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment where location-based services users are impeded by device limitations. Better context-aware human-computer interaction models of mobile location-based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but to understand the human processes involved in spatial query, which will in turn inform the detailed design of better user interfaces in mobile location-based services. In this study, a context-aware adaptive model for the mobile location-based services interface is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model, we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model addresses users' demands in a complicated environment and demonstrate its feasibility through experimental results. PMID:26457077
Universal Design and the Smart Home.
Pennick, Tim; Hessey, Sue; Craigie, Roland
2016-01-01
The related concepts of Universal Design, Inclusive Design, and Design For All, all recognise that no one solution will fit the requirements of every possible user. This paper considers the extent to which current developments in smart home technology can help to reduce the numbers of users for whom mainstream technology is not sufficiently inclusive, proposing a flexible approach to user interface (UI) implementation focussed on the capabilities of the user. This implies development of the concepts underlying Universal Design to include the development of a flexible inclusive support infrastructure, servicing the requirements of individual users and their personalised user interface devices.
High-level user interfaces for transfer function design with semantics.
Salama, Christof Rezk; Keller, Maik; Kohlmann, Peter
2006-01-01
Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for non-expert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation.
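One way to picture the extra level of abstraction is to treat each transfer function as a sampled opacity curve, apply PCA to a set of expert-designed presets, and expose the leading component as a single high-level slider. The sketch below does this on synthetic presets; the curve representation and slider scaling are assumptions and not the paper's semantic parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)
# Synthetic "expert presets": Gaussian opacity bumps with varying center and width
presets = np.array([np.exp(-((x - c) ** 2) / (2 * w ** 2))
                    for c, w in zip(rng.uniform(0.3, 0.7, 20), rng.uniform(0.05, 0.15, 20))])

mean = presets.mean(axis=0)
_, _, vt = np.linalg.svd(presets - mean, full_matrices=False)
pc1 = vt[0]                                            # first principal component of the presets

def slider_to_transfer_function(t: float) -> np.ndarray:
    """Map one high-level slider value t in [-1, 1] to a full opacity curve."""
    scale = np.sqrt(((presets - mean) @ pc1).var())    # typical coefficient magnitude
    return np.clip(mean + t * scale * pc1, 0.0, 1.0)

print(slider_to_transfer_function(0.8)[:8].round(3))
```

A non-expert user would then manipulate only the semantic slider, while the full transfer function is reconstructed behind the scenes.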
ERIC Educational Resources Information Center
Cheon, Jongpil; Grant, Michael
2012-01-01
This study proposes a new instrument to measure cognitive load types related to user interface and demonstrates theoretical assumptions about different load types. In reconsidering established cognitive load theory, the inadequacies of the theory are criticized in terms of the adaption of learning efficiency score and distinction of cognitive load…
Design and evaluation of nonverbal sound-based input for those with motor handicapped.
Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong
2013-03-01
Most personal computing interfaces rely on the users' ability to use their hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate Windows' Graphical User Interface by producing hums and fricatives through regular microphones. Hierarchically arranged menus are utilized so that only minimal numbers of different actions are required at a time. The proposed scheme was found to be accurate and capable of responding promptly compared to other sound-based schemes. Being able to select from multiple item-selecting modes helps reduce the average time needed to complete tasks in the test scenarios to almost half the time needed when the tasks were performed solely through cursor movements. Still, better support for helping users select the most appropriate modes for desired tasks should improve the overall usability of the proposed scheme.
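A rough sketch of the sound-driven, hierarchically arranged menu idea is given below: a zero-crossing-rate feature separates a synthetic hum from a synthetic fricative, and the two sound classes step through and select items in a small menu tree. The feature, the threshold, the menu contents, and the hum/fricative command assignment are illustrative assumptions, not the proposed scheme's recognizer.

```python
import numpy as np

FS = 16000   # assumed sampling rate

def zero_crossing_rate(x):
    # Fraction of sample-to-sample sign changes; noise-like fricatives score much higher than hums
    return np.mean(np.abs(np.diff(np.sign(x)))) / 2

def classify(frame, zcr_threshold=0.1):
    return 'fricative' if zero_crossing_rate(frame) > zcr_threshold else 'hum'

MENU = {'root': ['File', 'Edit', 'Help'], 'File': ['Open', 'Save', 'Close']}  # toy menu tree

def navigate(sounds):
    path, index = 'root', 0
    for s in sounds:
        if classify(s) == 'hum':                 # hum = move the highlight to the next item
            index = (index + 1) % len(MENU[path])
        else:                                    # fricative = select the highlighted item
            choice = MENU[path][index]
            if choice in MENU:
                path, index = choice, 0          # descend into a submenu
            else:
                return choice                    # leaf item selected
    return MENU[path][index]

t = np.arange(int(0.1 * FS)) / FS
hum = np.sin(2 * np.pi * 150 * t)                      # 150 Hz voiced hum
fric = np.random.default_rng(0).normal(size=t.size)    # white noise as a stand-in fricative
print(navigate([fric, hum, hum, fric]))                # enter File, move twice, select -> 'Close'
```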
Vision based interface system for hands free control of an intelligent wheelchair
Ju, Jin Sun; Shin, Yunhee; Kim, Eun Yi
2009-01-01
Background Due to the shift of the age structure in today's populations, the need to develop devices or technologies to support the elderly has been increasing. Traditionally, the wheelchair, both powered and manual, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it is still highly restrictive, especially for the severely disabled. As a solution to this, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities in their limbs. Methods This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To accommodate a wide variety of user abilities, the proposed system uses face-inclination and mouth-shape information, where the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. The system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infrared sensors, a PC camera, and a vision system. The vision system analyzes the user's gestures in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair. Result & conclusion The advantages of the proposed system include 1) accurate recognition of the user's intention with minimal user motion and 2) robustness to a cluttered background and time-varying illumination. To demonstrate these advantages, the proposed system was tested with 34 users in indoor and outdoor environments and the results were compared with those of other systems. The results showed that the proposed system has superior performance in terms of speed and accuracy, and that it provides a friendly and convenient interface for severely disabled people. PMID:19660132
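A minimal sketch of the detector and converter stages described above, assuming an OpenCV Haar cascade (AdaBoost-trained) for face detection; the angle threshold, command names, and the mouth-shape input are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch only: Haar-cascade (AdaBoost) face detection as in the
# detector stage; the inclination/mouth-shape recognizers are simplified stubs.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the largest detected face region (x, y, w, h) or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

def command_from_gestures(face_angle_deg, mouth_open):
    """Hypothetical converter stage: map recognized gestures to commands."""
    if face_angle_deg > 15:
        return "TURN_LEFT"
    if face_angle_deg < -15:
        return "TURN_RIGHT"
    return "GO" if mouth_open else "STOP"
```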
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Liu, I-Hsiung
1985-01-01
The currently developed multi-level language interfaces of information systems are generally designed for experienced users. These interfaces commonly ignore the nature and needs of the largest user group, i.e., casual users. This research identifies the importance of natural language query system research within information storage and retrieval system development; addresses the topics of developing such a query system; and finally, proposes a framework for the development of natural language query systems in order to facilitate the communication between casual users and information storage and retrieval systems.
Cognitive Task Analysis, Interface Design, and Technical Troubleshooting.
ERIC Educational Resources Information Center
Steinberg, Linda S.; Gitomer, Drew H.
A model of the interface design process is proposed that makes use of two interdependent levels of cognitive analysis: the study of the criterion task through an analysis of expert/novice differences and the evaluation of the working user interface design through the application of a practical interface analysis methodology (GOMS model). This dual…
Support of surgical process modeling by using adaptable software user interfaces
NASA Astrophysics Data System (ADS)
Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.
2010-03-01
Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical Process Models are used in a variety of use cases, including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling work by using an ontological knowledge base, which drives the graphical user interface so that the terminology search space is restricted depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, and the adaptive user interface significantly reduced the observer's workload in the criteria of effort, mental demand, and temporal demand, helping the observer concentrate on the essential task of modeling the surgical process.
Users' Interaction with World Wide Web Resources: An Exploratory Study Using a Holistic Approach.
ERIC Educational Resources Information Center
Wang, Peiling; Hawk, William B.; Tenopir, Carol
2000-01-01
Presents results of a study that explores factors of user-Web interaction in finding factual information, develops a conceptual framework for studying user-Web interaction, and applies a process-tracing method for conducting holistic user-Web studies. Describes measurement techniques and proposes a model consisting of the user, interface, and the…
Microcomputer Program Design Considerations for the Novice User
1987-03-01
relatively recent, widespread proliferation of microcomputers into both the home and work place has resulted in a shifting of computer operation and...design decisions be made with respect to both the requirements specifications and interface considerations. Since a project’s requirement...expertise, may be quite meaningless or confusing to the end user. It is therefore proposed that interface design decisions should be made under the assumption
Speech-recognition interfaces for music information retrieval
NASA Astrophysics Data System (ADS)
Goto, Masataka
2005-09-01
This paper describes two hands-free music information retrieval (MIR) systems that enable a user to retrieve and play back a musical piece by saying its title or the artist's name. Although various interfaces for MIR have been proposed, speech-recognition interfaces suitable for retrieving musical pieces have not been studied. Our MIR-based jukebox systems employ two different speech-recognition interfaces for MIR, speech completion and speech spotter, which exploit intentionally controlled nonverbal speech information in original ways. The first is a music retrieval system with the speech-completion interface that is suitable for music stores and car-driving situations. When a user only remembers part of the name of a musical piece or an artist and utters only a remembered fragment, the system helps the user recall and enter the name by completing the fragment. The second is a background-music playback system with the speech-spotter interface that can enrich human-human conversation. When a user is talking to another person, the system allows the user to enter voice commands for music playback control by spotting a special voice-command utterance in face-to-face or telephone conversations. Experimental results from use of these systems have demonstrated the effectiveness of the speech-completion and speech-spotter interfaces. (Video clips: http://staff.aist.go.jp/m.goto/MIR/speech-if.html)
Avatars and virtual agents – relationship interfaces for the elderly
2017-01-01
In the Digital Era, the authors witness a change in the relationship between the patient and the caregiver or the Health Maintenance Organizations providing health services. Another factor is the use of various technologies to increase the effectiveness and quality of health services across all primary and secondary users. These technologies range from telemedicine systems and decision-making tools to online self-service applications and virtual agents, all providing information and assistance. The common thread among these digital implementations is that they all require human-machine interfaces. These interfaces must be interactive, user friendly and inviting, to create user involvement and cooperation incentives. The challenge is to design interfaces which best fit the target users and enable smooth interaction, especially for elderly users. Avatars and Virtual Agents are one of the interfaces used for both home care monitoring and companionship. They are also inherently multimodal in nature and allow an intimate relation between the elderly user and the Avatar. This study discusses the need for and nature of these relationship models and the challenges of designing for the elderly. The study proposes key features for the design and evaluation of assistive applications using Avatars and Virtual Agents for elderly users. PMID:28706725
Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines their results through a fusion process to obtain a minimum-variance estimate of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to evaluate the performance objectively. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877
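For two independent, unbiased estimates, the minimum-variance combination reduces to inverse-variance weighting. The sketch below illustrates that idea for a single yaw angle measured by the inertial sensor and by vision; the numbers are made up and this is not the authors' fusion code.

```python
# Minimal sketch (not the authors' implementation): inverse-variance fusion of
# two head-yaw estimates, one from an inertial sensor and one from vision.
def fuse_min_variance(yaw_imu, var_imu, yaw_cam, var_cam):
    """Return the minimum-variance combination of two unbiased estimates."""
    w_imu = var_cam / (var_imu + var_cam)   # weight grows as the other sensor's noise grows
    yaw = w_imu * yaw_imu + (1.0 - w_imu) * yaw_cam
    var = (var_imu * var_cam) / (var_imu + var_cam)
    return yaw, var

# Example: noisy IMU (variance 4 deg^2) and steadier vision (variance 1 deg^2).
print(fuse_min_variance(12.0, 4.0, 9.0, 1.0))   # -> (9.6, 0.8)
```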
A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents
Griol, David
2016-01-01
Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and human-computer interfaces. In this contribution, we present a framework based on neural networks that allows modeling of the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system, taking into consideration the user's needs and preferences. We have evaluated our proposal by developing a user-adapted spoken dialogue system that facilitates tourist information and services, and we provide a detailed discussion of the positive influence of our proposal on the success of the interaction, the information and services provided, and the quality perceived by the users. PMID:26819592
Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario
2015-01-01
Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system's complexity. In HMIs, the user is responsible for generating unique patterns of command signals that directly trigger the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI in which the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate) and decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept, the method was used in one particular application, namely, to implement the control of all the relevant functions in a state-of-the-art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable to or, for some outcome measures, better than that of classic myoelectric interfaces. The presented approach has general applicability, and the obtained results suggest that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs. PMID:26069961
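A toy sketch of the scan-and-acknowledge idea described above, with printed prompts standing in for the electro-tactile stimuli and a generic callable standing in for the single command signal; all names and timings are assumptions, not the authors' implementation.

```python
# Simplified sketch of the scanning-and-acknowledge selection scheme:
# options are presented one at a time (here printed instead of delivered as
# electro-tactile stimuli) and the user issues a single signal to accept one.
import time

def scan_and_select(options, user_acknowledged, dwell_s=1.5):
    """Cycle through options until the single command signal arrives."""
    while True:
        for option in options:
            print(f"presenting: {option}")       # stand-in for tactile stimulation
            t0 = time.time()
            while time.time() - t0 < dwell_s:
                if user_acknowledged():          # e.g. one EMG burst or a switch press
                    return option
                time.sleep(0.01)

# usage (hypothetical): scan_and_select(["open hand", "close hand", "pinch"], read_switch)
```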
da Silva de Queiroz Pierre, Raisa; Kawada, Tarô Arthur Tavares; Fontes, André Guimarães
2012-01-01
The aim is to develop a proposal for a digital interface for a remote-control system that functions as a support system during the operation of an air conditioner, adjusted for users in general on the basis of ergonomic parameters, with the objective of reducing the problems faced by the user and improving the process. Twenty people were evaluated with a questionnaire at both qualitative and quantitative levels. The Linear Method consists of a sequence of steps in which the input of each step depends on the output of the previous one, although the steps themselves are independent. Feedback, when necessary, must occur within each step separately.
Zander, Thorsten O; Kothe, Christian
2011-04-01
Cognitive monitoring is an approach utilizing realtime brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.
Usability engineering for augmented reality: employing user-based studies to inform design.
Gabbard, Joseph L; Swan, J Edward
2008-01-01
A major challenge, and thus opportunity, in the field of human-computer interaction and specifically usability engineering is designing effective user interfaces for emerging technologies that have no established design guidelines or interaction metaphors or introduce completely new ways for users to perceive and interact with technology and the world around them. Clearly, augmented reality is one such emerging technology. We propose a usability engineering approach that employs user-based studies to inform design, by iteratively inserting a series of user-based studies into a traditional usability engineering lifecycle to better inform initial user interface designs. We present an exemplar user-based study conducted to gain insight into how users perceive text in outdoor augmented reality settings and to derive implications for design in outdoor augmented reality. We also describe lessons learned from our experiences conducting user-based studies as part of the design process.
Chase, C R; Ashikaga, T; Mazuzan, J E
1994-07-01
The objective of our study was to assess the acceptability of a proposed user interface to visually interfaced computer-assisted anesthesia record (VISI-CAARE), before the application was begun. The user interface was defined as the user display and its user orientation methods. We designed methods to measure user performance and attitude toward two different anesthesia record procedures: (1) the traditional pen and paper anesthetic record procedure of our hospital, and (2) VISI-CAARE. Performance measurements included the reaction speed (identifying the type and time of an event) and completion speed (describing the event). Performance also included accuracy of the recorded time of the event and accuracy of the description. User attitude was measured by (1) the physician's rating on a scale of 0 to 9 of the potential usefulness of computers in anesthesia care; (2) willingness to use the future application in the clinical environment; and (3) user suggestions for change. These measurements were used in a randomized trial of 21 physicians, of which data from 20 were available. After exposure to VISI-CAARE, the experimental subjects' ranking of computer usefulness in anesthesia care improved significantly (4.2 +/- 1.1 to 7.6 +/- 1.5, p = 0.0001), as did controls' (5.2 +/- 2.6 to 8 +/- 1.5, p = 0.0019). All the volunteers were willing to try the proposed prototype clinically, when it was ready. VISI-CAARE exposure was associated with faster and more accurate reaction to events over the traditional pen and paper machine, and slower and more accurate description of events in an artificial mock setting. VISI-CAARE 1.1 demonstrated significant improvements in both reaction speed and completion speed over VISI-CAARE 1.0, after changes were made to the user display and orientation methods. With graphic user interface prototyping environments, one can obtain preliminary user attitude and performance data, even before application programming is begun. This may be helpful in revising initial display and orientation methods, while obtaining user interest and commitment before actual programming and clinical testing.
Automatic User Interface Generation for Visualizing Big Geoscience Data
NASA Astrophysics Data System (ADS)
Yu, H.; Wu, J.; Zhou, Y.; Tang, Z.; Kuo, K. S.
2016-12-01
Along with advanced computing and observation technologies, geoscience and its related fields have been generating a large amount of data at an unprecedented growth rate. Visualization becomes an increasingly attractive and feasible means for researchers to effectively and efficiently access and explore data to gain new understandings and discoveries. However, visualization has been challenging due to a lack of effective data models and visual representations to tackle the heterogeneity of geoscience data. We propose a new geoscience data visualization framework that leverages interface automata theory to automatically generate the user interface (UI). Our study has the following three main contributions. First, geoscience data has a unique hierarchical structure and complex formats, and therefore it is relatively easy for users to get lost or confused during their exploration of the data. By applying an interface automata model to the UI design, users can be clearly guided to the exact visualization and analysis that they want. In addition, from a development perspective, an interface automaton is also easier to understand than conditional statements, which can simplify the development process. Second, it is common for geoscience data to have discontinuities in its hierarchical structure. The application of interface automata can prevent users from suffering automation surprises and enhance the user experience. Third, to support a variety of different data visualizations and analyses, our design with interface automata also makes applications extendable, in that a new visualization function or a new data group can easily be added to an existing application, which reduces the overhead of maintenance significantly. We demonstrate the effectiveness of our framework using real-world applications.
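A toy interface automaton in the spirit described above: the UI exposes only the actions that are legal transitions from the current state, so the user cannot wander into undefined parts of the data hierarchy. The states and actions here are invented for illustration and are not taken from the published framework.

```python
# Toy interface automaton (assumed states/actions): the UI offers only the
# actions that are legal transitions from the current state, guiding the user
# through a hierarchical geoscience dataset step by step.
TRANSITIONS = {
    "dataset":       {"select_variable": "variable"},
    "variable":      {"select_region": "region", "back": "dataset"},
    "region":        {"render_map": "visualization", "back": "variable"},
    "visualization": {"back": "region"},
}

def available_actions(state):
    return sorted(TRANSITIONS[state])

def step(state, action):
    if action not in TRANSITIONS[state]:
        raise ValueError(f"action '{action}' not allowed in state '{state}'")
    return TRANSITIONS[state][action]

state = "dataset"
for action in ("select_variable", "select_region", "render_map"):
    print(state, "->", available_actions(state))
    state = step(state, action)
```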
User interaction in smart ambient environment targeted for senior citizen.
Pulli, Petri; Hyry, Jaakko; Pouke, Matti; Yamamoto, Goshiro
2012-11-01
Many countries are facing a problem as the age structure of their societies changes. The number of senior citizens is rising rapidly, and the number of caretaking personnel cannot match the problems and needs of these citizens. Using smart, ubiquitous technologies can offer ways of coping with the need for more nursing staff and the rising costs to society of taking care of senior citizens. Helping senior citizens with a novel, easy-to-use interface that guides and assists could improve their quality of living and make them participate more in daily activities. This paper presents a projection-based display system for elderly people with memory impairments and the proposed user interface for the system. The recognition of the user's processes based on a sensor network is also described. Elderly people wearing the system can interact with the projected user interface by tapping physical surfaces (such as walls, tables, or doors), using them as natural, haptic-feedback input surfaces.
A novel asynchronous access method with binary interfaces
2008-01-01
Background Traditional synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. Then, a recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results When compared to random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion This novel access method offers a variety of advantages over traditional synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques
Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben
2014-01-01
Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415
Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.
Rutkowski, Tomasz M; Mori, Hiromu
2015-04-15
The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye movements) or from the so-called "ear-blocking syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). In order to further remove EEG interferences and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG. The proposed method is also computationally more effective compared to empirical mode decomposition. SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illuminated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, against classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
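As a rough illustration of the classification stage only, the sketch below trains a logistic-regression target/non-target classifier on EEG epochs. Note that the paper's synchrosqueezing-transform preprocessing is replaced here by a plain band-pass filter and decimation, and the data are synthetic.

```python
# Rough sketch only: the SST preprocessing is replaced by a 1-10 Hz band-pass
# filter and crude decimation, followed by logistic-regression classification
# of target vs. non-target epochs (as in the LR stage described above).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate (Hz)

def preprocess(epochs):
    """epochs: (n_epochs, n_samples) EEG; band-pass then decimate as features."""
    b, a = butter(4, [1 / (FS / 2), 10 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    return filtered[:, ::8]

# Hypothetical training data: X_raw (n_epochs, n_samples), labels y in {0, 1}.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, FS))
y = rng.integers(0, 2, 200)
clf = LogisticRegression(max_iter=1000).fit(preprocess(X_raw), y)
print("training accuracy:", clf.score(preprocess(X_raw), y))
```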
The effect of visualizing the flow of multimedia content among and inside devices.
Lee, Dong-Seok
2009-05-01
This study introduces a user interface, referred to as the flow interface, which provides a graphical representation of the movement of content among and inside audio/video devices. The proposed interface provides a different frame of reference with content-oriented visualization of the generation, manipulation, storage, and display of content as well as input and output. The flow interface was applied to a VCR/DVD recorder combo, one of the most complicated consumer products. A between-group experiment was performed to determine whether the flow interface helps users to perform various tasks and to examine the learning effect of the flow interface, particularly in regard to hooking up and recording tasks. The results showed that participants with access to the flow interface performed better in terms of success rate and elapsed time. In addition, the participants indicated that they could easily understand the flow interface. The potential of the flow interface for application to other audio video devices, and design issues requiring further consideration, are discussed.
An EOG-Based Human-Machine Interface for Wheelchair Control.
Huang, Qiyun; He, Shenghong; Wang, Qihong; Gu, Zhenghui; Peng, Nengneng; Li, Kai; Zhang, Yuandong; Shao, Ming; Li, Yuanqing
2017-07-27
Non-manual human-machine interfaces (HMIs) have been studied for wheelchair control with the aim of helping severely paralyzed individuals regain some mobility. The challenge is to rapidly, accurately and sufficiently produce control commands, such as left and right turns, forward and backward motions, acceleration, deceleration, and stopping. In this paper, a novel electrooculogram (EOG)-based HMI is proposed for wheelchair control. Thirteen flashing buttons are presented in the graphical user interface (GUI), and each of the buttons corresponds to a command. These buttons flash one by one in a pre-defined sequence. The user can select a button by blinking in sync with its flashes. The algorithm detects the eye blinks from a channel of vertical EOG data and determines the user's target button based on the synchronization between the detected blinks and the button's flashes. For healthy subjects/patients with spinal cord injuries (SCIs), the proposed HMI achieved an average accuracy of 96.7%/91.7% and a response time of 3.53 s/3.67 s with a zero false positive rate (FPR). Using only one channel of vertical EOG signals associated with eye blinks, the proposed HMI can accurately provide sufficient commands with a satisfactory response time, providing a novel non-manual approach for severely paralyzed individuals to control a wheelchair. Compared with a newly established EOG-based HMI, the proposed HMI can generate more commands with higher accuracy, a lower FPR and fewer electrodes.
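A simplified sketch of the selection logic described above: threshold-based blink detection on the vertical EOG channel, followed by scoring each button by how many of its flash onsets are followed by a blink. The threshold, sampling rate, and window length are assumptions, not the published parameters.

```python
# Illustrative sketch (assumed thresholds): detect blinks in vertical EOG and
# score each button by how many of its flash onsets are followed by a blink.
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def detect_blinks(veog, threshold_uv=150.0):
    """Return sample indices where the signal crosses the blink threshold upward."""
    above = veog > threshold_uv
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def select_button(blink_idx, flash_onsets_by_button, window_s=0.5):
    """Pick the button whose flashes are most often followed by a blink."""
    scores = {}
    for button, onsets in flash_onsets_by_button.items():
        hits = sum(np.any((blink_idx >= o) & (blink_idx < o + window_s * FS))
                   for o in onsets)
        scores[button] = hits
    return max(scores, key=scores.get)
```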
Content-based Music Search and Recommendation System
NASA Astrophysics Data System (ADS)
Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo
Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data suiting their preference in such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By exploiting the visualization of the feature space of music and the visualization of the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system gives the user information obtained from the user profile when searching music data, and information obtained from the feature space of music when editing the user profile.
Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.
Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar
2016-05-01
Physical rehabilitation supported by computer-assisted interfaces is gaining popularity in the health-care community. In this paper, we propose a computer-vision-assisted, contactless methodology to facilitate palm and finger rehabilitation. A Leap Motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have proposed an interface using the Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMMs) is used to classify gesture sequences performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly when applied to isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
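A small sketch of the isolated-gesture comparison using scikit-learn; the feature dimensionality, class count, and data are placeholders rather than the study's actual Leap Motion features.

```python
# Sketch under assumed data shapes: compare LDA and SVM on isolated-gesture
# feature vectors (e.g., palm position/orientation statistics from the sensor).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 12))        # 300 gesture samples, 12 features each (placeholder)
y = rng.integers(0, 5, 300)               # 5 hypothetical gesture classes

for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```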
Systematically evaluating interfaces for RNA-seq analysis from a life scientist perspective.
Poplawski, Alicia; Marini, Federico; Hess, Moritz; Zeller, Tanja; Mazur, Johanna; Binder, Harald
2016-03-01
RNA-sequencing (RNA-seq) has become an established way for measuring gene expression in model organisms and humans. While methods development for refining the corresponding data processing and analysis pipeline is ongoing, protocols for typical steps have been proposed and are widely used. Several user interfaces have been developed for making such analysis steps accessible to life scientists without extensive knowledge of command line tools. We performed a systematic search and evaluation of such interfaces to investigate to what extent these can indeed facilitate RNA-seq data analysis. We found a total of 29 open source interfaces, and six of the more widely used interfaces were evaluated in detail. Central criteria for evaluation were ease of configuration, documentation, usability, computational demand and reporting. No interface scored best in all of these criteria, indicating that the final choice will depend on the specific perspective of users and the corresponding weighting of criteria. Considerable technical hurdles had to be overcome in our evaluation. For many users, this will diminish potential benefits compared with command line tools, leaving room for future improvement of interfaces. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
ALMA from the Users' Perspective
NASA Astrophysics Data System (ADS)
Johnson, Kelsey
2010-05-01
After decades of dreaming and preparation, the call for early science with ALMA is just around the corner. The goal of this talk is to illustrate the process of preparing and carrying out a research program with ALMA. This presentation will step through the user interface for proposal preparation, proposal review, project tracking, data acquisition, and post-processing. Examples of the software tools, including the simulator and spectral line catalog, will be included.
NASA Technical Reports Server (NTRS)
Kirlik, Alex; Kossack, Merrick Frank
1993-01-01
This status report consists of a thesis entitled 'Ecological Task Analysis: A Method for Display Enhancements.' Previous use of various analysis processes for the purpose of display interface design or enhancement has run the risk of failing to improve user performance because the analysis results in only a sequential listing of user tasks. Adopting an ecological approach to performing the task analysis, however, may result in the modeling of an unpredictable and variable task domain that is required to improve user performance. Kirlik has proposed an Ecological Task Analysis framework which is designed for this purpose. It is the purpose of this research to measure this framework's effectiveness at enhancing display interfaces in order to improve user performance. Following the proposed framework, an ecological task analysis of experienced users of a complex and dynamic laboratory task, Star Cruiser, was performed. Based on this analysis, display enhancements were proposed and implemented. An experiment was then conducted to compare this new version of Star Cruiser to the original. By measuring user performance at different tasks, it was determined that during early sessions, use of the enhanced display contributed to better user performance compared to that achieved using the original display. Furthermore, the results indicate that the enhancements proposed as a result of the ecological task analysis affected user performance differently depending on whether they aid in the selection of a possible action or in the performance of an action. Generalizations of these findings to larger, more complex systems were avoided since the analysis was only performed on this one particular system.
Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo
2018-05-01
Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse & keyboard-based interface both for expert and non-expert users.
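For a sense of how captured orientations can drive character posing in Blender, the hypothetical bpy snippet below keyframes one quaternion sample onto a pose bone; the object and bone names are placeholders and this is not taken from the authors' tool.

```python
# Hypothetical Blender (bpy) snippet: apply one captured orientation sample to
# a pose bone and keyframe it; object and bone names are placeholders.
import bpy

def keyframe_orientation(armature_name, bone_name, quat_wxyz, frame):
    bone = bpy.data.objects[armature_name].pose.bones[bone_name]
    bone.rotation_mode = "QUATERNION"
    bone.rotation_quaternion = quat_wxyz          # (w, x, y, z) from the tracker
    bone.keyframe_insert(data_path="rotation_quaternion", frame=frame)

# e.g. one sample streamed from the Kinect/tangible prop at frame 10:
# keyframe_orientation("Armature", "upper_arm.R", (0.98, 0.0, 0.2, 0.0), 10)
```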
Choi, Kup-Sze; Chan, Tak-Yin
2015-03-01
To investigate the feasibility of using a tablet device as a user interface for students with upper extremity disabilities to input mathematics efficiently into a computer. A touch-input system using a tablet device as the user interface was proposed to assist these students in writing mathematics. User-switchable and context-specific keyboard layouts were designed to streamline the input process. The system could be integrated with conventional computer systems with only minor software setup. A two-week pre-post test study involving five participants was conducted to evaluate the performance of the system and collect user feedback. The mathematics input efficiency of the participants was found to improve during the experiment sessions. In particular, their performance in entering trigonometric expressions using the touch-input system was significantly better than that with conventional mathematics editing software using keyboard and mouse. The participants rated the touch-input system positively and were confident that they could operate it with ease given more practice. The proposed touch-input system provides a convenient way for students with hand impairment to write mathematics and has the potential to facilitate their mathematics learning. Implications for Rehabilitation: Students with upper extremity disabilities often face barriers to learning mathematics, which is largely based on handwriting. Conventional computer user interfaces are inefficient for them to input mathematics into a computer. A touch-input system with context-specific and user-switchable keyboard layouts was designed to improve the efficiency of mathematics input. Experimental results and user feedback suggested that the system has the potential to facilitate mathematics learning for these students.
NASA Astrophysics Data System (ADS)
Fukada, Hidemi; Kobayashi, Kazue; Satou, Kenji; Kawana, Hideyuki; Masuda, Tomohiro
Most traditional disaster information systems require expert staff with high computer literacy to be posted so that the system can be operated quickly and correctly in the tense situation that arises when a disaster occurs. However, in the current disaster response systems of local governments, it is not easy to post such expert staff because local governments are struggling with staff cuts due to administrative and fiscal reform. In this research, we propose a disaster information management system that can be easily operated, even under the disorderly conditions of a disaster, by municipal personnel in charge of disaster management. The system achieves usability that enables easy input of damage information, even by local government staff with no expertise, by using a digital pen and a tabletop user interface. Evaluation was conducted by prospective users using a prototype, and the evaluation results are satisfactory with regard to the functionality and operability of the proposed system.
Toward an Alternative Learning Environment Interface for Learning Management Systems
ERIC Educational Resources Information Center
Abdous, M'hammed
2013-01-01
An effective learning environment interface (LEI) is a means to enable students to focus on learning and to understand content, while establishing connections and relationships among course activities. Using this fundamental premise, we propose a flexible, user-centered, and seamless LEI which is intended to remediate the fragmented interface…
Proposal for a CLIPS software library
NASA Technical Reports Server (NTRS)
Porter, Ken
1991-01-01
This paper is a proposal to create a software library for the C Language Integrated Production System (CLIPS) expert system shell developed by NASA. Many innovative ideas for extending CLIPS were presented at the First CLIPS Users Conference, including useful user and database interfaces. CLIPS developers would benefit from a software library of reusable code. The CLIPS Users Group should establish a software library; a course of action to make that happen is proposed here. Open discussion to revise this library concept is essential, since only a group effort is likely to succeed. A response form intended to solicit opinions and support from the CLIPS community is included.
Varela, Gervasio; Paz-Lopez, Alejandro; Becerra, Jose A; Duro, Richard
2016-07-07
This paper addresses the problem of developing user interfaces for Ubiquitous Computing (UC) and Ambient Intelligence (AmI) systems. These kinds of systems are expected to provide a natural user experience, considering interaction modalities adapted to the user's abilities and preferences and using whatever interaction devices are present in the environment. These interaction devices are not necessarily known at design time. The task is quite complicated due to the variety of devices and technologies and the diversity of scenarios, and it usually burdens the developer with the need to create many different UIs in order to cover the foreseeable user-environment combinations. Here, we propose a UI abstraction framework for UC and AmI systems that effectively improves the portability of those systems between different environments and for different users. It allows developers to design and implement a single UI capable of being deployed with different devices and modalities regardless of the physical location.
ERIC Educational Resources Information Center
LaGuardia, Cheryl; Huber, Chuck
1992-01-01
Discusses proposals for more standardized practices in CD-ROM development, sales, and distribution. Topics discussed include availability of trial copies; pricing policies; installation, including software, instructions, and compatibility; interface procedures; manuals; and vendor support services. A sidebar discusses proposals being addressed by…
Carr, Eloise Cj; Babione, Julie N; Marshall, Deborah
2017-08-01
To identify the needs and requirements of the end users, to inform the development of a user-interface to translate an existing evidence-based decision support tool into a practical and usable interface for health service planning for osteoarthritis (OA) care. We used a user-centered design (UCD) approach that emphasized the role of the end-users and is well-suited to knowledge translation (KT). The first phase used a needs assessment focus group (n=8) and interviews (n=5) with target users (health care planners) within a provincial health care organization. The second phase used a participatory design approach, with two small group sessions (n=6) to explore workflow, thought processes, and needs of intended users. The needs assessment identified five design recommendations: ensuring the user-interface supports the target user group, allowing for user-directed data explorations, input parameter flexibility, clear presentation, and provision of relevant definitions. The second phase identified workflow insights from a proposed scenario. Graphs, the need for a visual overview of the data, and interactivity were key considerations to aid in meaningful use of the model and knowledge translation. A UCD approach is well suited to identify health care planners' requirements when using a decision support tool to improve health service planning and management of OA. We believe this is one of the first applications to be used in planning for health service delivery. We identified specific design recommendations that will increase user acceptability and uptake of the user-interface and underlying decision support tool in practice. Our approach demonstrated how UCD can be used to enable knowledge translation. Copyright © 2017 Elsevier B.V. All rights reserved.
A versatile program for the calculation of linear accelerator room shielding.
Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M
2018-03-22
This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness in the case of three-dimensional conformal radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent for low and high energy radiation, respectively, in door and maze areas. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature, namely the ability to calculate the required treatment room thickness in the case of IMRT and TBI. It is characterised by simplicity, precision, and data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
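For reference, the primary-barrier relation from NCRP report no. 151 that such a program implements can be written in a few lines; the sketch below uses placeholder input values and is not the SCR code itself.

```python
# Illustrative implementation of the NCRP Report No. 151 primary-barrier
# relations used by such programs; the numeric inputs below are placeholders.
import math

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """
    P    shielding design goal at the protected point (Sv/week)
    d    distance from target to the protected point (m)
    W    workload at 1 m (Gy/week); U use factor; T occupancy factor
    tvl1 first tenth-value layer, tvle equilibrium TVL (same length unit)
    """
    B = P * d**2 / (W * U * T)          # required transmission factor
    n = -math.log10(B)                  # number of TVLs needed
    return tvl1 + (n - 1) * tvle

# Example with made-up numbers: 0.1 mSv/wk goal, 6 m, 450 Gy/wk, U=0.25, T=1,
# concrete TVLs of 0.37 m and 0.33 m (values chosen only for illustration).
print(round(primary_barrier_thickness(1e-4, 6.0, 450, 0.25, 1.0, 0.37, 0.33), 2), "m")
```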
A graphical, rule based robotic interface system
NASA Technical Reports Server (NTRS)
Mckee, James W.; Wolfsberger, John
1988-01-01
The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the object on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of data bases accessible to the program and displayable to the user.
Interface methods for using intranet portal organizational memory information system.
Ji, Yong Gu; Salvendy, Gavriel
2004-12-01
In this paper, an intranet portal is considered as an information infrastructure (organizational memory information system, OMIS) supporting organizational learning. The properties and the hierarchical structure of information and knowledge in an intranet portal OMIS were identified as a problem for the navigation tools of an intranet portal interface. The problem relates to the navigation and retrieval functions of an intranet portal OMIS and is expected to adversely affect user performance, satisfaction, and usefulness. To solve the problem, a conceptual model for the navigation tools of an intranet portal interface was proposed, and an experiment using a crossover design was conducted with 10 participants. In the experiment, a separate access method (tabbed tree tool) was compared to a unified access method (single tree tool). The results indicate that each information/knowledge repository for which a user has different structural knowledge should be handled separately, with separate access, to increase user satisfaction and the usefulness of the OMIS and to improve user performance in navigation.
A practical VEP-based brain-computer interface.
Wang, Yijun; Wang, Ruiping; Gao, Xiaorong; Hong, Bo; Gao, Shangkai
2006-06-01
This paper introduces the development of a practical brain-computer interface at Tsinghua University. The system uses frequency-coded steady-state visual evoked potentials to determine the gaze direction of the user. To ensure more universal applicability of the system, approaches for reducing user variation on system performance have been proposed. The information transfer rate (ITR) has been evaluated both in the laboratory and at the Rehabilitation Center of China, respectively. The system has been proved to be applicable to > 90% of people with a high ITR in living environments.
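A simplified sketch of frequency-coded SSVEP detection: pick the stimulation frequency with the largest spectral power in an occipital EEG epoch. The stimulus frequencies and sampling rate are assumptions, and the published system may use a different decoding method.

```python
# Simplified sketch (not the authors' exact pipeline): choose the stimulation
# frequency whose spectral power in the EEG epoch is highest.
import numpy as np

FS = 250                                  # assumed sampling rate (Hz)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]      # hypothetical target frequencies

def detect_gaze_target(epoch):
    """epoch: 1-D occipital EEG segment; returns the most likely stimulus frequency."""
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(len(epoch)))) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in STIM_FREQS]
    return STIM_FREQS[int(np.argmax(powers))]
```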
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-25
... Change Regarding Providing Participants With a New Optional Settlement Web Interface February 22, 2011... Rule Change The proposed rule change will establish a new browser-based interface, the ``Settlement Web... Browser System (``PBS'').\\4\\ Based on request from its Participants, DTC has created a more user-friendly...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barton, Michael; Droge, Johannes; Belmann, Peter
2017-06-22
Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. The developers propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.
Interaction design challenges and solutions for ALMA operations monitoring and control
NASA Astrophysics Data System (ADS)
Pietriga, Emmanuel; Cubaud, Pierre; Schwarz, Joseph; Primet, Romain; Schilling, Marcus; Barkats, Denis; Barrios, Emilio; Vila Vilaro, Baltasar
2012-09-01
The ALMA radio-telescope, currently under construction in northern Chile, is a very advanced instrument that presents numerous challenges. From a software perspective, one critical issue is the design of graphical user interfaces for operations monitoring and control that scale to the complexity of the system and to the massive amounts of data users are faced with. Early experience operating the telescope with only a few antennas has shown that conventional user interface technologies are not adequate in this context. They consume too much screen real-estate, require many unnecessary interactions to access relevant information, and fail to provide operators and astronomers with a clear mental map of the instrument. They increase extraneous cognitive load, impeding tasks that call for quick diagnosis and action. To address this challenge, the ALMA software division adopted a user-centered design approach. For the last two years, astronomers, operators, software engineers and human-computer interaction researchers have been involved in participatory design workshops, with the aim of designing better user interfaces based on state-of-the-art visualization techniques. This paper describes the process that led to the development of those interface components and to a proposal for the science and operations console setup: brainstorming sessions, rapid prototyping, joint implementation work involving software engineers and human-computer interaction researchers, feedback collection from a broader range of users, further iterations and testing.
Dialogue enabling speech-to-text user assistive agent system for hearing-impaired person.
Lee, Seongjae; Kang, Sunmee; Han, David K; Ko, Hanseok
2016-06-01
A novel approach for assisting bidirectional communication between people with normal hearing and hearing-impaired people is presented. While existing hearing-impaired assistive devices such as hearing aids and cochlear implants are vulnerable to extreme noise conditions or post-surgery side effects, the proposed concept is an alternative approach in which spoken dialogue is achieved by employing a robust speech recognition technique that takes noisy environmental factors into consideration without any attachment to the human body. The proposed system is a portable device with an acoustic beamformer for directional noise reduction that performs speech-to-text transcription using a keyword spotting method. It is also equipped with a user interface optimized for hearing-impaired people, rendering device usage intuitive and natural across diverse domain contexts. The experimental results confirm that the proposed interface design is feasible for realizing an effective and efficient intelligent agent for the hearing-impaired.
An alternating pressure sequence proposal for an air-cell cushion for preventing pressure ulcers.
Arias, Sandra; Cardiel, Eladio; Rogeli, Pablo; Mori, Taketoshi; Nakagami, Gojiro; Noguchi, Hiroshi; Sanada, Hiromi
2014-01-01
The distribution and release of pressure on the ischial regions are two important parameters for evaluating the effectiveness of a cushion; in particular, the release of pressure over time on the ischial tuberosities (IT) is significant for preventing pressure ulcers. The aim of this work is to evaluate the effect on interface pressure (IP) of applying a proposed alternating pressure sequence to an air-cell cushion. Six healthy volunteers were asked to sit on the air-cell cushion, in static and alternating modes, as well as on a typical foam cushion, for 12 minutes. Interface pressure was monitored with a matrix sensor system. Interface pressure values on the ischial tuberosities, user contact area, and pressure distribution were analyzed. Results showed that IP on the IT tends to increase on both the foam and static cushions, while on the alternating cushion IP on the IT tends to decrease. The user contact area was significantly larger on the alternating cushion than on the static or foam cushions. Moreover, pressure is redistributed better with the alternating cushion than with the other cushions. The goal of the alternating sequence is to redistribute pressure and stimulate the ischial regions in order to promote blood flow and prevent pressure ulcers in wheelchair users.
Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan
2016-01-01
Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce their dependence on caregivers, and improve their quality of life. In addition to easy usage, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input of the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server successfully enables internet-based wireless control of electrical home appliances through BCIs.
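To make the idea of web-accessible appliance control concrete, here is a minimal sketch of an HTTP command endpoint. It is not the authors' embedded web server; the command paths, port, and lamp example are illustrative assumptions only.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical mapping from selected commands (e.g., via P300) to appliance actions.
    ACTIONS = {"/lamp/on": "lamp switched on", "/lamp/off": "lamp switched off"}

    class ControlHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            result = ACTIONS.get(self.path)
            if result is None:
                self.send_response(404)
                self.end_headers()
                return
            # A real deployment would drive a relay or actuator here.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(result.encode())

    # HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()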
Probabilistic vs linear blending approaches to shared control for wheelchair driving.
Ezeh, Chinemelu; Trautman, Pete; Devigne, Louise; Bureau, Valentin; Babel, Marie; Carlson, Tom
2017-07-01
Some people with severe mobility impairments are unable to operate powered wheelchairs reliably and effectively using commercially available interfaces. This has sparked a body of research into "smart wheelchairs", which assist users to drive safely and create opportunities for them to use alternative interfaces. Various "shared control" techniques have been proposed to provide an appropriate level of assistance that is satisfactory and acceptable to the user. Most shared control techniques employ a traditional strategy called linear blending (LB), where the user's commands and the wheelchair's autonomous commands are combined in some proportion. In this paper, however, we implement a more generalised form of shared control called probabilistic shared control (PSC). This probabilistic formulation improves the accuracy of modelling the interaction between the user and the wheelchair by taking into account uncertainty in the interaction. We demonstrate the practical success of PSC over LB in terms of safety, particularly for novice users.
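For readers unfamiliar with the linear-blending baseline that PSC is compared against, a minimal sketch is given below; the command representation and the blending weight are illustrative assumptions. The probabilistic formulation differs by weighting candidate commands according to how likely they are under models of the user and the wheelchair, rather than by a fixed proportion.

    import numpy as np

    def linear_blend(u_user, u_auto, alpha):
        """Linear blending (LB): combine the user's command and the autonomous
        command in proportion alpha (0 = fully autonomous, 1 = fully manual)."""
        u_user = np.asarray(u_user, dtype=float)
        u_auto = np.asarray(u_auto, dtype=float)
        return alpha * u_user + (1.0 - alpha) * u_auto

    # e.g. (linear, angular) velocity commands, hypothetical values:
    # blended = linear_blend(u_user=[0.4, 0.8], u_auto=[0.1, 0.0], alpha=0.6)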
Bioboxes: standardised containers for interchangeable bioinformatics software.
Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D
2015-01-01
Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.
Adaptive multimodal interaction in mobile augmented reality: A conceptual framework
NASA Astrophysics Data System (ADS)
Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad
2017-10-01
Augmented Reality (AR) has recently emerged as a technology in many mobile applications. Mobile AR has been defined as a medium for displaying information merged with the real-world environment, with the augmented surroundings mapped into a single view. There are four main types of mobile augmented reality interfaces, and one of them is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal interfaces in mobile augmented reality. The main goal of this study is to propose a conceptual framework that illustrates the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces, and augmented reality. We analyzed the components of the previous frameworks and assessed which can be applied on mobile devices. Our framework can be used as a guide for designers and developers to develop mobile AR applications with adaptive multimodal interfaces.
NASA Technical Reports Server (NTRS)
Bullington, Stanley F.
1992-01-01
The following list of requirements specifies the proposed revisions to the Experiment Scheduling Program (ESP2) which deal with schedule repair. These requirements are divided into those which are general in nature, those which relate to measurement and analysis functions of the software, those which relate specifically to conflict resolution, and those relating directly to the user interface. (This list is not a complete list of requirements for the user interface, but only a list of those schedule repair requirements which relate to the interface.) Some of the requirements relate only to uses of the software in real-time operations. Others are clearly for future versions of the software, beyond the upcoming revision. In either case, the fact will be clearly stated.
On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.
Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando
2017-08-01
Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis, as users often struggle to obtain the desired orientation, which is achieved only after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capability of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate its usability: the proposed touchless interface improves spatial awareness and provides more fluent interaction with the 3D volume than traditional 2D input devices, as it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping-plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection, and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed.
Development of Web Interfaces for Analysis Codes
NASA Astrophysics Data System (ADS)
Emoto, M.; Watanabe, T.; Funaba, H.; Murakami, S.; Nagayama, Y.; Kawahata, K.
Several codes have been developed to analyze plasma physics. However, most of them are developed to run on supercomputers. Therefore, users who typically use personal computers (PCs) find it difficult to use these codes. In order to facilitate the widespread use of these codes, a user-friendly interface is required. The authors propose Web interfaces for these codes. To demonstrate the usefulness of this approach, the authors developed Web interfaces for two analysis codes. One of them is for FIT developed by Murakami. This code is used to analyze the NBI heat deposition, etc. Because it requires electron density profiles, electron temperatures, and ion temperatures as polynomial expressions, those unfamiliar with the experiments find it difficult to use this code, especially visitors from other institutes. The second one is for visualizing the lines of force in the LHD (large helical device) developed by Watanabe. This code is used to analyze the interference caused by the lines of force resulting from the various structures installed in the vacuum vessel of the LHD. This code runs on PCs; however, it requires that the necessary parameters be edited manually. Using these Web interfaces, users can execute these codes interactively.
User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework
Markstrom, Steven L.; Koczot, Kathryn M.
2008-01-01
The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.
Varela, Gervasio; Paz-Lopez, Alejandro; Becerra, Jose A.; Duro, Richard
2016-01-01
This paper addresses the problem of developing user interfaces for Ubiquitous Computing (UC) and Ambient Intelligence (AmI) systems. These kinds of systems are expected to provide a natural user experience, considering interaction modalities adapted to the user's abilities and preferences and using whatever interaction devices are present in the environment. These interaction devices are not necessarily known at design time. The task is quite complicated due to the variety of devices and technologies and the diversity of scenarios, and it usually burdens the developer with the need to create many different UIs in order to cover the foreseeable user-environment combinations. Here, we propose a UI abstraction framework for UC and AmI systems that effectively improves the portability of those systems between different environments and for different users. It allows developers to design and implement a single UI capable of being deployed with different devices and modalities regardless of the physical location. PMID:27399711
Design of a mobile brain computer interface-based smart multimedia controller.
Tseng, Kevin C; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh
2015-03-06
Music is a way of expressing our feelings and emotions, and suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user's physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user's physiological state. A commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user's EEG features and select music according to his/her state. The relationship between the user's state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback based on a user's EEG features may positively improve the user's attention state.
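The abstract does not specify which EEG feature drives the music selection, so the sketch below only illustrates the kind of attention-related feature such a controller could use: a band-power ratio (beta over alpha plus theta) computed from a short EEG segment. The feature choice, bands, and threshold are assumptions.

    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs, band):
        """Average power of one EEG channel inside a frequency band (Hz)."""
        freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
        mask = (freqs >= band[0]) & (freqs < band[1])
        return np.trapz(psd[mask], freqs[mask])

    def attention_index(eeg, fs):
        """Engagement-style ratio beta / (alpha + theta); higher is often read
        as a more attentive state (illustrative feature, not the paper's)."""
        theta = band_power(eeg, fs, (4, 8))
        alpha = band_power(eeg, fs, (8, 13))
        beta = band_power(eeg, fs, (13, 30))
        return beta / (alpha + theta)

    # If attention_index(segment, fs=256) drops below a chosen threshold,
    # the controller could switch to a more stimulating playlist.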
Is There a Chance for a Standardised User Interface?
ERIC Educational Resources Information Center
Fletcher, Liz
1993-01-01
Issues concerning the implementation of standard user interfaces for CD-ROMs are discussed, including differing perceptions of the ideal interface, graphical user interfaces, user needs, and the standard protocols. It is suggested that users should be able to select from a variety of user interfaces on each CD-ROM. (EA)
A review method for UML requirements analysis model employing system-side prototyping.
Ogata, Shinpei; Matsuura, Saeko
2013-12-01
User interface prototyping is an effective method for users to validate the requirements defined by analysts at an early stage of software development. However, a user interface prototype offers weak support for analysts to verify the consistency of specifications about internal aspects of a system such as business logic. As a result, such inconsistency causes substantial rework costs, because it often makes it impossible for developers to realize the system based on the specifications. Functional prototyping is an effective method for analysts to verify such consistency, but it is costly and requires more detailed specifications. In this paper, we propose a review method that lets analysts efficiently verify the consistency among several different kinds of UML diagrams by employing system-side prototyping without a detailed model. The system-side prototype does not implement any business logic, but visualizes the results of the integration among the UML diagrams as Web pages. The usefulness of our proposal was evaluated by applying it to the development of a Library Management System (LMS) for a laboratory, conducted by a group. As a result, our proposal was useful for discovering a serious inconsistency caused by misunderstanding among the members of the group.
Development of a User Interface for a Regression Analysis Software Tool
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
An easy-to-use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface's design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen, which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and graphical user interface are discussed in order to illustrate the user interface's overall design approach.
Comparing Text-based and Graphic User Interfaces for Novice and Expert Users
Chen, Jung-Wei; Zhang, Jiajie
2007-01-01
The Graphic User Interface (GUI) is commonly considered to be superior to the Text-based User Interface (TUI). This study compares GUI and TUI in an electronic dental record system. Several usability analysis techniques compared the relative effectiveness of a GUI and a TUI. Expert users and novice users were evaluated on the time required and the steps needed to complete the task. A within-subject design was used to evaluate whether experience with either interface affects task performance. The results show that the GUI was not better than the TUI for expert users, but the GUI was better for novice users. For novice users there was a learning-transfer effect from TUI to GUI. This means that whether a user interface is user-friendly depends on the mapping between the interface and the tasks; a GUI by itself may or may not be better than a TUI. PMID:18693811
Future View: Web Navigation based on Learning User's Browsing Strategy
NASA Astrophysics Data System (ADS)
Nagino, Norikatsu; Yamada, Seiji
In this paper, we propose the Future View system, which assists a user's everyday Web browsing. Future View prefetches Web pages based on the user's browsing strategies and presents them to the user in order to assist Web browsing. To learn the user's browsing strategy, Future View uses two types of learning classifier systems: a content-based classifier system for content change patterns and an action-based classifier system for the user's action patterns. The results of learning are applied to crawling by Web robots, and the gathered Web pages are presented to the user through a Web browser interface. We experimentally show the effectiveness of navigation using Future View.
NASA Technical Reports Server (NTRS)
Wakim, Nagi T.; Srivastava, Sadanand; Bousaidi, Mehdi; Goh, Gin-Hua
1995-01-01
Agent-based technologies address several challenges posed by additional information processing requirements in today's computing environments. In particular, (1) users desire interaction with computing devices in a mode which is similar to that used between people, (2) the efficiency and successful completion of information processing tasks often require a high level of expertise in complex and multiple domains, and (3) information processing tasks often require handling of large volumes of data and, therefore, continuous and endless processing activities. The concept of an agent is an attempt to address these new challenges by introducing information processing environments in which (1) users can communicate with a system in a natural way, (2) an agent is a specialist and a self-learner and, therefore, it qualifies to be trusted to perform tasks independently of the human user, and (3) an agent is an entity that is continuously active, performing tasks that are either delegated to it or self-imposed. The work described in this paper focuses on the development of an interface agent for users of a complex information processing environment (IPE). This activity is part of an ongoing effort to build a model for developing agent-based information systems. Such systems will be highly applicable to environments which require a high degree of automation, such as flight control operations, and/or processing of large volumes of data in complex domains, such as the EOSDIS environment and other multidisciplinary scientific data systems. The concept of an agent as an information processing entity is fully described with emphasis on characteristics of special interest to the User-System Interface Agent (USIA). Issues such as agent 'existence' and 'qualification' are discussed in this paper. Based on a definition of an agent and its main characteristics, we propose an architecture for the development of interface agents for users of an IPE that is agent-oriented and whose resources are likely to be distributed and heterogeneous in nature. The architecture of USIA is outlined in two main components: (1) the user interface, which is concerned with issues such as user dialog and interaction, user modeling, and adaptation to the user profile, and (2) the system interface part, which deals with identification of IPE capabilities, task understanding and feasibility assessment, and task delegation and coordination of assistant agents.
Evaluating and extending user-level fault tolerance in MPI applications
Laguna, Ignacio; Richards, David F.; Gamblin, Todd; ...
2016-01-11
The user-level failure mitigation (ULFM) interface has been proposed to provide fault-tolerant semantics in the Message Passing Interface (MPI). Previous work presented performance evaluations of ULFM; yet questions related to its programmability and applicability, especially to non-trivial, bulk synchronous applications, remain unanswered. In this article, we present our experiences of using ULFM in a case study with a large, highly scalable, bulk synchronous molecular dynamics application to shed light on the advantages and difficulties of this interface for programming fault-tolerant MPI applications. We found that, although ULFM is suitable for master-worker applications, it provides few benefits for more common bulk synchronous MPI applications. Furthermore, to address these limitations, we introduce a new, simpler fault-tolerant interface for complex, bulk synchronous MPI programs with better applicability and support than ULFM for application-level recovery mechanisms, such as global rollback.
A two-class self-paced BCI to control a robot in four directions.
Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro
2011-01-01
In this work, an electroencephalographic analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward and move back. In order to reduce the probability of misclassification, the BCI is to be controlled with only two mental tasks (relaxed state versus imagination of right hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The obtained results show that the proposed interface enables control over the robot, even for subjects with low BCI performance.
Envisioning Advanced User Interfaces for E-Government Applications: A Case Study
NASA Astrophysics Data System (ADS)
Calvary, Gaëlle; Serna, Audrey; Coutaz, Joëlle; Scapin, Dominique; Pontico, Florence; Winckler, Marco
The increasing use of the Web as a software platform, together with the advance of technology, has promoted Web applications as a starting point for improving communication between citizens and the administration. Currently, several e-government Web portals offer applications for accessing information regarding healthcare, taxation, registration, housing, agriculture, education, and social services, which otherwise may be difficult to obtain. However, the adoption of services provided to citizens depends upon how well such applications comply with users' needs. Unfortunately, building an e-government website does not guarantee that all citizens who come to use it can access its contents. These services need to be accessible to all citizens/customers equally to ensure wider reach and subsequent adoption of e-government services. User disabilities, computer or language illiteracy (e.g., a foreign language), flexibility of information access (e.g., users located remotely in rural areas, homeless people, mobile users), and ensuring user privacy for sensitive data are some of the barriers that must be taken into account when designing the User Interface (UI) of e-government applications.
Efficient Verification of Holograms Using Mobile Augmented Reality.
Hartl, Andreas Daniel; Arth, Clemens; Grubert, Jens; Schmalstieg, Dieter
2016-07-01
Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, optically variable devices such as holograms are important, but difficult to inspect. Augmented Reality can provide all relevant information on standard mobile devices. However, hologram verification on mobile devices still takes a long time and provides lower accuracy than inspection by humans using appropriate reference information. We aim to address these drawbacks through automatic matching combined with a special parametrization of an efficient, goal-oriented user interface that supports constrained navigation. We first evaluate a series of similarity measures for matching hologram patches to provide a sound basis for automatic decisions. Then a re-parametrized user interface is proposed based on observations of typical user behavior during document capture. These measures help to reduce capture time to approximately 15 s, with better decisions on the evaluated samples than untrained users can achieve.
Classifying BCI signals from novice users with extreme learning machine
NASA Astrophysics Data System (ADS)
Rodríguez-Bermúdez, Germán; Bueno-Crespo, Andrés; José Martinez-Albaladejo, F.
2017-07-01
A brain computer interface (BCI) allows external devices to be controlled using only the electrical activity of the brain. In order to improve such systems, several approaches have been proposed. However, algorithms are usually tested with standard BCI signals from expert users or from repositories available on the Internet. In this work, the extreme learning machine (ELM) has been tested with signals from 5 novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method for classifying electroencephalogram signals from novice users.
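For reference, the core of an extreme learning machine is small enough to sketch: hidden-layer weights are random and fixed, and only the output weights are solved for in closed form. The sketch below is a generic ELM classifier, not the exact configuration used in the paper (hidden-layer size, activation, and features are assumptions).

    import numpy as np

    class ELM:
        """Minimal extreme learning machine classifier."""
        def __init__(self, n_hidden=100, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, y):
            T = np.eye(int(y.max()) + 1)[y]                     # one-hot targets
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            self.beta = np.linalg.pinv(self._hidden(X)) @ T     # closed-form output weights
            return self

        def predict(self, X):
            return np.argmax(self._hidden(X) @ self.beta, axis=1)

    # clf = ELM(n_hidden=200).fit(X_train, y_train)
    # accuracy = (clf.predict(X_test) == y_test).mean()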
A Cross-Layer User Centric Vertical Handover Decision Approach Based on MIH Local Triggers
NASA Astrophysics Data System (ADS)
Rehan, Maaz; Yousaf, Muhammad; Qayyum, Amir; Malik, Shahzad
Vertical handover decision algorithms that are based on user preferences and coupled with Media Independent Handover (MIH) local triggers have not been explored much in the literature. We have developed a comprehensive cross-layer solution, called the Vertical Handover Decision (VHOD) approach, which consists of three parts: a mechanism for collecting and storing user preferences, the Vertical Handover Decision (VHOD) algorithm, and the MIH Function (MIHF). The MIHF triggers the VHOD algorithm, which operates on user preferences to issue handover commands to the mobility management protocol. The VHOD algorithm is an MIH User and therefore needs to subscribe to events and configure thresholds for receiving triggers from the MIHF. In this regard, we have performed experiments in WLAN to suggest thresholds for the Link Going Down trigger. We have also critically evaluated the handover decision process, proposed a Just-in-time interface activation technique, compared our proposed approach with prominent user-centric approaches, and analyzed our approach from different aspects.
User interface for a tele-operated robotic hand system
Crawford, Anthony L
2015-03-24
Disclosed here is a user interface for a robotic hand. The user interface anchors a user's palm in a relatively stationary position and determines various angles of interest necessary for a user's finger to achieve a specific fingertip location. The user interface additionally conducts a calibration procedure to determine the user's applicable physiological dimensions. The user interface uses the applicable physiological dimensions and the specific fingertip location, and treats the user's finger as a two link three degree-of-freedom serial linkage in order to determine the angles of interest. The user interface communicates the angles of interest to a gripping-type end effector which closely mimics the range of motion and proportions of a human hand. The user interface requires minimal contact with the operator and provides distinct advantages in terms of available dexterity, work space flexibility, and adaptability to different users.
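As an illustration of the kinematic step described above, the sketch below solves the inverse kinematics of a planar two-link finger model: given phalanx lengths (the calibrated physiological dimensions) and a target fingertip position, it returns the joint angles. This is a two degree-of-freedom simplification for illustration; the patented interface treats the finger as a two-link, three degree-of-freedom linkage, and the names here are assumptions.

    import numpy as np

    def two_link_ik(x, y, l1, l2, elbow_up=True):
        """Joint angles (theta1, theta2) that place the tip of a planar
        two-link chain with link lengths l1, l2 at the target (x, y)."""
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        c2 = np.clip(c2, -1.0, 1.0)      # guard against rounding / unreachable targets
        theta2 = np.arccos(c2)
        if not elbow_up:
            theta2 = -theta2
        theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                               l1 + l2 * np.cos(theta2))
        return theta1, theta2

    # e.g. two_link_ik(0.05, 0.03, l1=0.045, l2=0.030) returns angles in radians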
González, Isaías; Calderón, Antonio José; Mejías, Andrés; Andújar, José Manuel
2016-10-31
In this paper, the design and implementation of a network for integrating Programmable Logic Controllers (PLC), the Object-Linking and Embedding for Process Control protocol (OPC), and the open-source Easy Java Simulations (EJS) package is presented. A LabVIEW interface and the Java-Internet-LabVIEW (JIL) server complete the scheme for data exchange. This configuration allows the user to interact remotely with the PLC. Such integration can be considered a novelty in the scientific literature for remote control and sensor data acquisition in industrial plants. An experimental application devoted to remote laboratories is developed to demonstrate the feasibility and benefits of the proposed approach. The experiment to be conducted is the parameterization and supervision of a fuzzy controller of a DC servomotor. The graphical user interface has been developed with EJS, and the fuzzy control is carried out by our own PLC. The distinctive features of the proposed network application are the integration of the OPC protocol to share information with the PLC and the application under control. The user can tune the controller parameters online and observe the effect on the servomotor behavior in real time. The target group is remote engineering users, specifically in control- and automation-related tasks. The proposed system architecture is described and experimental results are presented.
Remote Adaptive Communication System
2001-10-25
...manage several different devices using the software tool. A. Client/Server Architecture: The architecture we are proposing is based on the Client/Server model (see figure 3). We want both client and server to be accessible from anywhere via the Internet. The computer, acting as a server, is in... the other hand, each of the client applications will act as sender or receiver, depending on the associated interface: user interface or device...
A Flexible System for Simulating Aeronautical Telecommunication Network
NASA Technical Reports Server (NTRS)
Maly, Kurt; Overstreet, C. M.; Andey, R.
1998-01-01
At Old Dominion University, we have built an Aeronautical Telecommunication Network (ATN) Simulator, funded by NASA. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of this model. We then describe our simulator architecture, highlighting some of the design specifications, scheduling algorithms and the user interface. Finally, we provide the results of performance studies on this simulator.
Classification of user interfaces for graph-based online analytical processing
NASA Astrophysics Data System (ADS)
Michaelis, James R.
2016-05-01
In the domain of business intelligence, user-oriented software for conducting multidimensional analysis via Online Analytical Processing (OLAP) is now commonplace. In this setting, datasets commonly have well-defined sets of dimensions and measures around which analysis tasks can be conducted. However, many forms of data used in intelligence operations, deriving from social networks, online communications, and text corpora, consist of graphs with varying forms of potential dimensional structure. Hence, enabling OLAP over such data collections requires explicit definition and extraction of supporting dimensions and measures. Further, as Graph OLAP remains an emerging technique, limited research has been done on its user interface requirements, namely on effective pairing of interface designs to different types of graph-derived dimensions and measures. This paper presents a novel technique for pairing user interface designs to Graph OLAP datasets, rooted in Analytic Hierarchy Process (AHP) driven comparisons. Attributes of the classification strategy are encoded through an AHP ontology, developed in our other work and extended to support pairwise comparison of interfaces, specifically according to their ability, as perceived by Subject Matter Experts (SMEs), to support dimensions and measures corresponding to Graph OLAP dataset attributes. To frame this discussion, a survey is provided of existing variations of Graph OLAP as well as existing interface designs previously applied in multidimensional analysis settings. Following this, a review of our AHP ontology is provided, along with a listing of corresponding dataset and interface attributes applicable toward SME recommendation structuring. A walkthrough of AHP-based recommendation encoding via the ontology-based approach is then provided. The paper concludes with a short summary of proposed future directions seen as essential for this research area.
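The AHP machinery behind such pairwise comparisons is compact enough to sketch. Given a matrix of SME judgements on Saaty's 1-9 scale, the priority weights are the normalised principal eigenvector; the judgement values below are purely hypothetical.

    import numpy as np

    def ahp_weights(pairwise):
        """Priority weights from an AHP pairwise-comparison matrix
        (pairwise[i, j] = preference of i over j, pairwise[j, i] = 1 / pairwise[i, j])."""
        A = np.asarray(pairwise, dtype=float)
        eigvals, eigvecs = np.linalg.eig(A)
        w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
        return w / w.sum()

    # Hypothetical SME judgement over three candidate interface designs:
    judgements = [[1, 3, 5],
                  [1 / 3, 1, 3],
                  [1 / 5, 1 / 3, 1]]
    print(ahp_weights(judgements))   # roughly [0.64, 0.26, 0.10]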
Bed occupancy monitoring: data processing and clinician user interface design.
Pouliot, Melanie; Joshi, Vilas; Goubran, Rafik; Knoefel, Frank
2012-01-01
Unobtrusive and continuous monitoring of patients, especially at their place of residence, is becoming a significant part of the healthcare model. A variety of sensors are being used to monitor different patient conditions. Bed occupancy monitoring provides clinicians a quantitative measure of bed entry/exit patterns and may provide information relating to sleep quality. This paper presents a bed occupancy monitoring system using a bed pressure mat sensor. A clinical trial was performed involving 8 patients to collect bed occupancy data. The trial period for each patient ranged from 5-10 weeks. This data was analyzed using a participatory design methodology incorporating clinician feedback to obtain bed occupancy parameters. The parameters extracted include the number of bed exits per night, the bed exit weekly average (including minimum and maximum), the time of day of a particular exit, and the amount of uninterrupted bed occupancy per night. The design of a clinical user interface plays a significant role in the acceptance of such patient monitoring systems by clinicians. The clinician user interface proposed in this paper was designed to be intuitive, easy to navigate and not cause information overload. An iterative design methodology was used for the interface design. The interface design is extendible to incorporate data from multiple sensors. This allows the interface to be part of a comprehensive remote patient monitoring system.
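As a simple illustration of how such occupancy parameters can be derived from a pressure-mat signal, the sketch below thresholds the total mat pressure to obtain an occupied/unoccupied trace, then counts exits and the longest uninterrupted occupancy. The thresholding approach and names are assumptions, not the authors' processing pipeline.

    import numpy as np

    def bed_occupancy_stats(pressure_sum, fs, occupied_threshold):
        """pressure_sum: total mat pressure per sample over one night,
        fs: samples per second, occupied_threshold: assumed occupancy cut-off."""
        occupied = np.asarray(pressure_sum) > occupied_threshold
        exits = int(np.sum(occupied[:-1] & ~occupied[1:]))   # occupied -> empty transitions
        longest = run = 0
        for o in occupied:
            run = run + 1 if o else 0
            longest = max(longest, run)
        return {"bed_exits": exits,
                "longest_uninterrupted_occupancy_h": longest / fs / 3600.0}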
The use of ambient audio to increase safety and immersion in location-based games
NASA Astrophysics Data System (ADS)
Kurczak, John Jason
The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks. We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a visual interface, while players performed significantly better at the game tasks when using the visual interface. This makes ambient audio a legitimate alternative to visual interfaces for mobile users when safety and immersion are a priority.
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between H.323 LAN (local-area network) and H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization
2013-01-01
...the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a rich, visual analytics interface containing coordinated views with 2D and 3D visualizations. Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.
Machine learning techniques for energy optimization in mobile embedded systems
NASA Astrophysics Data System (ADS)
Donohoo, Brad Kyoshi
Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
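To illustrate the second strategy, the sketch below trains a k-nearest-neighbour model (one of the learners evaluated in the thesis) to predict whether a power-hungry interface is needed in a given usage context; the feature set and tiny training sample are illustrative assumptions, not the author's data.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical contexts: [hour_of_day, latitude, longitude, is_weekday],
    # label 1 = WiFi needed in this context, 0 = WiFi can be powered down.
    X = np.array([[9,  40.01, -105.27, 1],
                  [13, 40.01, -105.27, 1],
                  [20, 40.02, -105.26, 0],
                  [23, 40.02, -105.26, 0]])
    y = np.array([1, 1, 0, 0])

    model = KNeighborsClassifier(n_neighbors=1).fit(X, y)

    current_context = np.array([[22, 40.02, -105.26, 1]])
    if model.predict(current_context)[0] == 0:
        print("WiFi predicted unnecessary in this context: power the interface down")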
NASA Astrophysics Data System (ADS)
Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir
2007-09-01
In this paper we propose a novel Passive Optical Network (PON) based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile email, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (the Iub interface). The T1/E1 lines are expensive and add to operating costs. Also, the resources (transceivers and T1/E1 lines) are dimensioned for peak-hour traffic, so most of the time the dedicated resources are idle and wasted. Furthermore, the T1/E1 lines are not capable of supporting the bandwidth (BW) required by the next-generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for the Universal Mobile Telecommunications System (UMTS) and Evolution Data Only (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON-based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs. The BW is dynamically allocated, and unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end-to-end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors. Because of the air interface, there can be times when the user equipment (UE) is unable to communicate with the Node B (usually referred to as a link error), and such link errors are bursty and location dependent. In the proposed approach, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping services between the active users; user data is divided into flows, and flows are allowed to lag or lead. The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness among error-free flows, (3) long-term fairness among errored and error-free flows, and (4) graceful degradation for leading flows and graceful compensation for lagging flows.
Syroid, Noah; Liu, David; Albert, Robert; Agutter, James; Egan, Talmage D; Pace, Nathan L; Johnson, Ken B; Dowdle, Michael R; Pulsipher, Daniel; Westenskow, Dwayne R
2012-11-01
Drug administration errors are frequent and are often associated with the misuse of IV infusion pumps. One source of these errors may be the infusion pump's user interface. We used failure modes-and-effects analyses to identify programming errors and to guide the design of a new syringe pump user interface. We designed the new user interface to clearly show the pump's operating state simultaneously in more than 1 monitoring location. We evaluated anesthesia residents in laboratory and simulated environments on programming accuracy and error detection between the new user interface and the user interface of a commercially available infusion pump. With the new user interface, we observed the number of programming errors reduced by 81%, the number of keystrokes per task reduced from 9.2 ± 5.0 to 7.5 ± 5.5 (mean ± SD), the time required per task reduced from 18.1 ± 14.1 seconds to 10.9 ± 9.5 seconds and significantly less perceived workload. Residents detected 38 of 70 (54%) of the events with the new user interface and 37 of 70 (53%) with the existing user interface, despite no experience with the new user interface and extensive experience with the existing interface. The number of programming errors and workload were reduced partly because it took less time and fewer keystrokes to program the pump when using the new user interface. Despite minimal training, residents quickly identified preexisting infusion pump problems with the new user interface. Intuitive and easy-to-program infusion pump interfaces may reduce drug administration errors and infusion pump-related adverse events.
Lim, Soo-Chul; Shin, Jungsoon; Kim, Seung-Chan; Park, Joonah
2015-07-09
Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose expanding the touch region of the screen to the back of the user's hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum-intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read wrist gestures. The gyroscope data feed a dynamic time warping (DTW) gesture recognition algorithm that eliminates unintended touch inputs during free motion of the wrist while wearing the smartwatch. A prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed positional information of the finger touching the back of the hand could be used to control the smartwatch's graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces.
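Dynamic time warping, the matching step mentioned above, can be sketched in a few lines: it aligns two wrist-motion traces of possibly different lengths and returns an accumulated distance, and traces far from every stored template are rejected as unintended motion. The template store and rejection threshold are illustrative assumptions.

    import numpy as np

    def dtw_distance(a, b):
        """DTW distance between two 1-D signals (e.g., gyroscope angular-rate traces)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify_gesture(trace, templates, reject_threshold):
        """Return the label of the closest template, or None for unintended motion."""
        best_label, best_dist = None, np.inf
        for label, template in templates.items():
            d = dtw_distance(trace, template)
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label if best_dist <= reject_threshold else None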
Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide
2018-05-01
In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism with the ability to adapt both to the user and to the robotic morphology. The aim is a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows users to shape the control motifs in the way they prefer, moving away from the case where the user has to read and understand an operation manual or has to learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole. Inherent properties of the architecture are presented and explained, and emergent properties are presented and investigated. Overall, this approach is found to highlight the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human-in-the-loop systems.
Versatile clinical information system design for emergency departments.
Amouh, Teh; Gemo, Monica; Macq, Benoît; Vanderdonckt, Jean; El Gariani, Abdul Wahed; Reynaert, Marc S; Stamatakis, Lambert; Thys, Frédéric
2005-06-01
Compared to other hospital units, the emergency department presents some distinguishing characteristics of its own. Emergency health-care delivery is a collaborative process involving the contribution of several individuals who accomplish their tasks while working autonomously under pressure and sometimes with limited resources. Effective computerization of the emergency department information system presents a real challenge due to the complexity of the scenario. Current computerized support suffers from several problems, including inadequate data models, clumsy user interfaces, and poor integration with other clinical information systems. To tackle such complexity, we propose an approach combining three points of view, namely the transactions (in and out of the department), the (mono and multi) user interfaces and data management. Unlike current systems, we pay particular attention to the user-friendliness and versatility of our system. This means that intuitive user interfaces have been conceived and specific software modeling methodologies have been applied to provide our system with the flexibility and adaptability necessary for the individual and group coordinated tasks. Our approach has been implemented by prototyping a web-based, multiplatform, multiuser, and versatile clinical information system built upon multitier software architecture, using the Java programming language.
Alppay, Cem; Bayazit, Nigan
2015-11-01
In this paper, we study the arrangement of displays in flight instrument panels of multi-purpose civil helicopters following a user-centered design method based on ergonomics principles. Our methodology can also be described as a user-interface arrangement methodology based on user opinions and preferences. This study can be outlined as gathering user-centered data using two different research methods and then analyzing and integrating the collected data to come up with an optimal instrument panel design. An interview with helicopter pilots formed the first step of our research. In that interview, pilots were asked to provide a quantitative evaluation of basic interface arrangement principles. In the second phase of the research, a paper prototyping study was conducted with the same pilots. The final phase of the study entailed synthesizing the findings from the interviews and observational studies to formulate an optimal flight instrument arrangement methodology. The primary results that we present in our paper are the methodology that we developed and three new interface arrangement concepts, namely relationship of inseparability, integrated value and locational value. An optimum instrument panel arrangement is also proposed by the researchers.
User Interface Design in Medical Distributed Web Applications.
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
2016-01-01
User interfaces are important for facilitating easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and to accommodate the user's needs and mode of operation; the underlying technology is an important tool for accomplishing this. The present work aims to create a web interface using a specific technology (HTML table design combined with CSS3) to provide an optimized, responsive interface for a complex web application. In the first phase, the layout of the current icMED web medical application is analyzed, and its structure is designed using specific tools on source files. In the second phase, a new graphical interface adaptable to different mobile terminals is proposed (using the HTML table design (TD) and CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at a time. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, using HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services.
Recommendations for a service framework to access astronomical archives
NASA Technical Reports Server (NTRS)
Travisano, J. J.; Pollizzi, J.
1992-01-01
There are a large number of astronomical archives and catalogs on-line for network access, with many different user interfaces and features. Some systems are moving towards distributed access, supplying users with client software for their home sites which connects to servers at the archive site. Many of the issues involved in defining a standard framework of services that archive/catalog suppliers can use to achieve a basic level of interoperability are described. Such a framework would simplify the development of client and server programs to access the wide variety of astronomical archive systems. The primary services that are supplied by current systems include: catalog browsing, dataset retrieval, name resolution, and data analysis. The following issues (and probably more) need to be considered in establishing a standard set of client/server interfaces and protocols: Archive Access - dataset retrieval, delivery, file formats, data browsing, analysis, etc.; Catalog Access - database management systems, query languages, data formats, synchronous/asynchronous mode of operation, etc.; Interoperability - transaction/message protocols, distributed processing mechanisms (DCE, ONC/SunRPC, etc), networking protocols, etc.; Security - user registration, authorization/authentication mechanisms, etc.; Service Directory - service registration, lookup, port/task mapping, parameters, etc.; Software - public vs proprietary, client/server software, standard interfaces to client/server functions, software distribution, operating system portability, data portability, etc. Several archive/catalog groups, notably the Astrophysics Data System (ADS), are already working in many of these areas. In the process of developing StarView, which is the user interface to the Space Telescope Data Archive and Distribution Service (ST-DADS), these issues and the work of others were analyzed. A framework of standard interfaces for accessing services on any archive system which would benefit archive user and supplier alike is proposed.
A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.
Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh
2017-07-01
Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of the gaze-controlled powered wheelchair is limited due to the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility-impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker and a soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance with an ITR of 37.77 bits/min among the three different conditions (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
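The information transfer rate reported above can be reproduced approximately with the standard Wolpaw ITR formula. The following is a minimal sketch, assuming the Wolpaw definition and hypothetical accuracy and selection-rate values; the abstract does not state which ITR definition or which exact values were used.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate in bits/min using the standard Wolpaw formula."""
    if accuracy >= 1.0:
        bits = math.log2(n_targets)
    elif accuracy <= 1.0 / n_targets:
        bits = 0.0  # at or below chance level, report zero by convention
    else:
        bits = (math.log2(n_targets)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
    return bits * selections_per_min

# Illustrative values only: 9 commands (8 directions + stop); the accuracy and
# selection rate below are hypothetical, chosen to land near the reported ITR.
print(round(wolpaw_itr(n_targets=9, accuracy=0.95, selections_per_min=14), 2))
```

With nine targets, 95% accuracy and 14 selections per minute the formula yields roughly 38 bits/min, in the same range as the reported 37.77 bits/min.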
'Fly Like This': Natural Language Interface for UAV Mission Planning
NASA Technical Reports Server (NTRS)
Chandarana, Meghan; Meszaros, Erica L.; Trujillo, Anna; Allen, B. Danette
2017-01-01
With the increasing presence of unmanned aerial vehicles (UAVs) in everyday environments, the user base of these powerful and potentially intelligent machines is expanding beyond exclusively highly trained vehicle operators to include non-expert system users. Scientists seeking to augment costly and often inflexible methods of data collection historically used are turning towards lower cost and reconfigurable UAVs. These new users require more intuitive and natural methods for UAV mission planning. This paper explores two natural language interfaces - gesture and speech - for UAV flight path generation through individual user studies. Subjects who participated in the user studies also used a mouse-based interface for a baseline comparison. Each interface allowed the user to build flight paths from a library of twelve individual trajectory segments. Individual user studies evaluated performance, efficacy, and ease-of-use of each interface using background surveys, subjective questionnaires, and observations on time and correctness. Analysis indicates that natural language interfaces are promising alternatives to traditional interfaces. The user study data collected on the efficacy and potential of each interface will be used to inform future intuitive UAV interface design for non-expert users.
NASA Astrophysics Data System (ADS)
Jiang, Y.
2015-12-01
Oceanographic resource discovery is a critical step for developing ocean science applications. With the increasing number of resources available online, many Spatial Data Infrastructure (SDI) components (e.g. catalogues and portals) have been developed to help manage and discover oceanographic resources. However, efficient and accurate resource discovery is still a big challenge because of the lack of data relevancy information. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, usage metrics, and user feedback. The objective is to improve the discovery accuracy of oceanographic data and reduce the time for scientists to discover, download and reformat data for their projects. Experiments and a search example show that the proposed engine helps both scientists and general users search for more accurate results with enhanced performance and user experience through a user-friendly interface.
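As a rough, hypothetical illustration of how metadata similarity, usage metrics and user feedback could be folded into one relevancy score, a minimal sketch follows; the field names, weights and the linear combination are assumptions and not the paper's actual ranking model.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    metadata_similarity: float  # 0..1, e.g. similarity of the query to the metadata text
    usage_score: float          # 0..1, normalized download/click counts
    feedback_score: float       # 0..1, normalized user ratings

def relevancy(d: Dataset, w_meta=0.5, w_usage=0.3, w_feedback=0.2) -> float:
    # Weighted linear combination; a real framework might learn these weights instead.
    return w_meta * d.metadata_similarity + w_usage * d.usage_score + w_feedback * d.feedback_score

candidates = [
    Dataset("sea_surface_temperature_v2", 0.82, 0.90, 0.75),
    Dataset("ocean_salinity_daily", 0.78, 0.40, 0.60),
]
for d in sorted(candidates, key=relevancy, reverse=True):
    print(f"{d.name}: {relevancy(d):.3f}")
```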
Recommender system based on scarce information mining.
Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang
2017-09-01
Guessing what a user may like is now a typical feature of video recommendation interfaces. Nowadays, the highly popular user-generated content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomenon of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation called Dirichlet mixture probit model for information scarcity (DPIS) is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample-level topic proportions as a multinomial item vector, and utilizes topical clustering on the user part for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance in accuracy, perplexity, and diversity in coverage than traditional methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Evolving the Land Information System into a Cloud Computing Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houser, Paul R.
The Land Information System (LIS) was developed to use advanced flexible land surface modeling and data assimilation frameworks to integrate extremely large satellite- and ground-based observations with advanced land surface models to produce continuous high-resolution fields of land surface states and fluxes. The resulting fields are extremely useful for drought and flood assessment, agricultural planning, disaster management, weather and climate forecasting, water resources assessment, and the like. We envisioned transforming the LIS modeling system into a scientific cloud computing-aware web and data service that would allow clients to easily set up and configure it for use in addressing large water management issues. The focus of this Phase 1 project was to determine the scientific, technical, and commercial merit and feasibility of the proposed LIS-cloud innovations that are currently barriers to broad LIS applicability. We (a) quantified the barriers to broad LIS utility and commercialization (high performance computing, big data, user interface, and licensing issues); (b) designed the proposed LIS-cloud web service, model-data interface, database services, and user interfaces; (c) constructed a prototype LIS user interface including abstractions for simulation control, visualization, and data interaction; (d) used the prototype to conduct a market analysis and survey to determine potential market size and competition; (e) identified LIS software licensing and copyright limitations and developed solutions; and (f) developed a business plan for development and marketing of the LIS-cloud innovation. While some significant feasibility issues were found in the LIS licensing, overall a high degree of LIS-cloud technical feasibility was found.
Guo, Hansong; Huang, He; Huang, Liusheng; Sun, Yu-E
2016-08-20
As the size of smartphone touchscreens has become larger and larger in recent years, operability with a single hand is getting worse, especially for female users. We envision that user experience can be significantly improved if smartphones are able to recognize the current operating hand, detect the hand-changing process and then adjust the user interfaces subsequently. In this paper, we proposed, implemented and evaluated two novel systems. The first one leverages the user-generated touchscreen traces to recognize the current operating hand, and the second one utilizes the accelerometer and gyroscope data of all kinds of activities in the user's daily life to detect the hand-changing process. These two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer and gyroscope features. As opposed to existing solutions that all require users to select the current operating hand or confirm the hand-changing process manually, our systems follow much more convenient and practical methods and allow users to change the operating hand frequently without any harm to the user experience. We conduct extensive experiments on Samsung Galaxy S4 smartphones, and the evaluation results demonstrate that our proposed systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR) respectively, when deciding with a single touchscreen trace or accelerometer-gyroscope data segment, and the False Positive Rates (FPR) are as low as 2.6% and 0.7% accordingly. These two systems can either work completely independently and achieve pretty high accuracies or work jointly to further improve the recognition accuracy.
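A minimal sketch of the supervised-classification step described above, using synthetic touchscreen-trace features and scikit-learn; the feature list and the random-forest choice are assumptions, not the classifiers actually used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Each row: hypothetical features of one touchscreen trace, e.g.
# [trace length, mean curvature, start x, start y, mean pressure, duration].
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # synthetic feature vectors
y = rng.integers(0, 2, size=500)       # synthetic labels: 0 = left hand, 1 = right hand

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("precision on held-out traces:", precision_score(y_te, clf.predict(X_te)))
```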
Development of a task analysis tool to facilitate user interface design
NASA Technical Reports Server (NTRS)
Scholtz, Jean C.
1992-01-01
A good user interface is one that facilitates the user in carrying out his task. Such interfaces are difficult and costly to produce. The most important aspect in producing a good interface is the ability to communicate to the software designers what the user's task is. The Task Analysis Tool is a system for cooperative task analysis and specification of the user interface requirements. This tool is intended to serve as a guide to development of initial prototypes for user feedback.
A new JAVA interface implementation of THESIAS: testing haplotype effects in association studies.
Tregouet, D A; Garelle, V
2007-04-15
THESIAS (Testing Haplotype EffectS In Association Studies) is popular software for carrying out haplotype association analysis in unrelated individuals. In addition to the command line interface, a graphical JAVA interface is now proposed, allowing one to run THESIAS in a user-friendly manner. In addition, new functionalities have been added to THESIAS, including the possibility to analyze polychotomous phenotypes and X-linked polymorphisms. The software package, including documentation and example data files, is freely available at http://genecanvas.ecgene.net. The source codes are also available upon request.
A self-paced motor imagery based brain-computer interface for robotic wheelchair control.
Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng
2011-10-01
This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair, except for the automatic obstacle avoidance based on a laser range finder when necessary. In order for the users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed the users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that controlling the simulated robot. Our emphasis is on allowing more potential users to use the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario, and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.
Human motion retrieval from hand-drawn sketch.
Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee
2012-05-01
The rapid growth of motion capture data increases the importance of motion retrieval. The majority of the existing motion retrieval approaches are based on a labor-intensive step in which the user browses and selects a desired query motion clip from the large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users' expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the CPU-intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well and is capable of retrieving logically and numerically similar motions, which is superior to previous approaches. The user study shows that the proposed system can be a useful tool for inputting motion queries once users are familiar with it. Finally, an application of generating a 3D animation from a hand-drawn comic strip is demonstrated.
Planetary data analysis and display system: A version of PC-McIDAS
NASA Technical Reports Server (NTRS)
Limaye, Sanjay S.; Sromovsky, L. A.; Saunders, R. S.; Martin, Michael
1993-01-01
We propose to develop a system for access and analysis of planetary data from past and future space missions based on an existing system, the PC-McIDAS workstation. This system is now in use in the atmospheric science community for access to meteorological satellite and conventional weather data. The proposed system would be usable not only by planetary atmospheric researchers but also by the planetary geologic community. By providing the critical tools of an efficient system architecture, newer applications and customized user interfaces can be added by the end user within such a system.
Distributed user interfaces for clinical ubiquitous computing applications.
Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik
2005-08-01
Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices such as digital pens, an active desk, and walk-up displays that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite of future mobile user interfaces and essential to develop clinical multi-device environments.
Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier
2017-05-30
Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to a low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a medium accuracy level above 0.83 on the real wheelchair control session. The results suggest that more extensive training with the proposed control system can be an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
StarTrax --- The Next Generation User Interface
NASA Astrophysics Data System (ADS)
Richmond, Alan; White, Nick
StarTrax is a software package to be distributed to end users for installation on their local computing infrastructure. It will provide access to many services of the HEASARC, i.e. bulletins, catalogs, proposal and analysis tools, initially for the ROSAT MIPS (Mission Information and Planning System), later for the Next Generation Browse. A user activating the GUI will reach all HEASARC capabilities through a uniform view of the system, independent of the local computing environment and of the networking method of accessing StarTrax. Use it if you prefer the point-and-click metaphor of modern GUI technology to the classical command-line interfaces (CLI). Notable strengths include: easy to use; excellent portability; very robust server support; feedback button on every dialog; painstakingly crafted User Guide. It is designed to support a large number of input devices including terminals, workstations and personal computers. XVT's Portability Toolkit is used to build the GUI in C/C++ to run on: OSF/Motif (UNIX or VMS), OPEN LOOK (UNIX), Macintosh, MS-Windows (DOS), or character systems.
The application of autostereoscopic display in smart home system based on mobile devices
NASA Astrophysics Data System (ADS)
Zhang, Yongjun; Ling, Zhi
2015-03-01
A smart home is a system to control home devices, and such systems are becoming more and more popular in our daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. On the other hand, 3D stereo display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts autostereoscopic display as the control interface is proposed to improve the user-friendliness of the user experience. In consideration of the iPad's limited hardware capabilities, we introduced a 3D image synthesizing method based on parallel processing with the Graphics Processing Unit (GPU) and implemented it with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared to the traditional smart home system, the proposed system applies autostereoscopic display to the smart home system's control interface, enhancing the realism, user-friendliness and visual comfort of the interface.
Pereira, Suzanne; Hassler, Sylvain; Hamek, Saliha; Boog, César; Leroy, Nicolas; Beuscart-Zéphir, Marie-Catherine; Favre, Madeleine; Venot, Alain; Duclos, Catherine; Lamy, Jean-Baptiste
2014-08-26
Clinical practice guidelines are useful for physicians, and guidelines are available on the Internet from various websites such as Vidal Recos. However, these guidelines are long and difficult to read, especially during consultation. Similar difficulties have been encountered with drug summaries of product characteristics. In a previous work, we have proposed an iconic language (called VCM, for Visualization of Concepts in Medicine) for representing patient conditions, treatments and laboratory tests, and we have used these icons to design a user interface that graphically indexes summaries of product characteristics. In the current study, our objective was to design and evaluate an iconic user interface for the consultation of clinical practice guidelines by physicians. Focus groups of physicians were set up to identify the difficulties encountered when reading guidelines. Icons were integrated into Vidal Recos, taking human factors into account. The resulting interface includes a graphical summary and an iconic indexation of the guideline. The new interface was evaluated. We compared the response times and the number of errors recorded when physicians answered questions about two clinical scenarios using the interactive iconic interface or a textual interface. Users' perceived usability was evaluated with the System Usability Scale. The main difficulties encountered by physicians when reading guidelines were obtaining an overview and finding recommendations for patients corresponding to "particular cases". We designed a graphical interface for guideline consultation, using icons to identify particular cases and providing a graphical summary of the icons organized by anatomy and etiology. The evaluation showed that physicians gave clinical responses more rapidly with the iconic interface than with the textual interface (25.2 seconds versus 45.6, p < 0.05). The physicians appreciated the new interface, and the System Usability Scale score value was 75 (between good and excellent). An interactive iconic interface can provide physicians with an overview of clinical practice guidelines, and can decrease the time required to access the content of such guidelines.
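The evaluation combines task response times with a System Usability Scale (SUS) score. The sketch below shows the standard SUS scoring rule and an unpaired t-test on hypothetical response times; the study's actual statistical design (for instance a paired comparison) may differ.

```python
import numpy as np
from scipy import stats

def sus_score(responses):
    """System Usability Scale: 10 items rated 1-5; odd items are positive, even items negative."""
    r = np.asarray(responses, dtype=float)
    odd = r[0::2] - 1        # items 1, 3, 5, 7, 9
    even = 5 - r[1::2]       # items 2, 4, 6, 8, 10
    return (odd.sum() + even.sum()) * 2.5

# Hypothetical per-physician response times (seconds) for the two interfaces.
iconic = np.array([22.0, 27.5, 24.1, 26.8, 25.0, 23.9])
textual = np.array([44.2, 47.0, 43.8, 48.1, 45.5, 46.0])
t_stat, p_value = stats.ttest_ind(iconic, textual)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("SUS example:", sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))
```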
Graphical User Interface Programming in Introductory Computer Science.
ERIC Educational Resources Information Center
Skolnick, Michael M.; Spooner, David L.
Modern computing systems exploit graphical user interfaces for interaction with users; as a result, introductory computer science courses must begin to teach the principles underlying such interfaces. This paper presents an approach to graphical user interface (GUI) implementation that is simple enough for beginning students to understand, yet…
Intentional Voice Command Detection for Trigger-Free Speech Interface
NASA Astrophysics Data System (ADS)
Obuchi, Yasunari; Sumiyoshi, Takashi
In this paper we introduce a new framework of audio processing, which is essential to achieve a trigger-free speech interface for home appliances. If the speech interface works continually in real environments, it must extract occasional voice commands and reject everything else. It is extremely important to reduce the number of false alarms because the number of irrelevant inputs is much larger than the number of voice commands even for heavy users of appliances. The framework, called Intentional Voice Command Detection, is based on voice activity detection, but enhanced by various speech/audio processing techniques such as emotion recognition. The effectiveness of the proposed framework is evaluated using a newly-collected large-scale corpus. The advantages of combining various features were tested and confirmed, and the simple LDA-based classifier demonstrated acceptable performance. The effectiveness of various methods of user adaptation is also discussed.
Online handwritten mathematical expression recognition
NASA Astrophysics Data System (ADS)
Büyükbayrak, Hakan; Yanikoglu, Berrin; Erçil, Aytül
2007-01-01
We describe a system for recognizing online, handwritten mathematical expressions. The system is designed with a user interface for writing scientific articles, supporting the recognition of basic mathematical expressions as well as integrals, summations, matrices, etc. A feed-forward neural network recognizes symbols, which are assumed to be single-stroke, and a recursive algorithm parses the expression by combining the neural network output and the structure of the expression. Preliminary results show that writer-dependent recognition rates are very high (99.8%) while writer-independent symbol recognition rates are lower (75%). The interface associated with the proposed system integrates the built-in recognition capabilities of Microsoft's Tablet PC API for recognizing textual input and supports conversion of hand-drawn figures into PNG format. This enables the user to enter text, mathematics and draw figures in a single interface. After recognition, all output is combined into one LaTeX document and compiled into a PDF file.
A proposed application programming interface for a physical volume repository
NASA Technical Reports Server (NTRS)
Jones, Merritt; Williams, Joel; Wrenn, Richard
1996-01-01
The IEEE Storage System Standards Working Group (SSSWG) has developed the Reference Model for Open Storage Systems Interconnection, Mass Storage System Reference Model Version 5. This document provides the framework for a series of standards for application and user interfaces to open storage systems. More recently, the SSSWG has been developing Application Programming Interfaces (APIs) for the individual components defined by the model. The API for the Physical Volume Repository is the most fully developed, but work is being done on APIs for the Physical Volume Library and for the Mover as well. The SSSWG meets every other month, and meetings are open to all interested parties. The Physical Volume Repository (PVR) is responsible for managing the storage of removable media cartridges and for mounting and dismounting these cartridges onto drives. This document describes a model which defines a Physical Volume Repository, and gives a brief summary of the Application Programming Interface (API) which the IEEE Storage Systems Standards Working Group (SSSWG) is proposing as the standard interface for the PVR.
Gkatzidou, Voula; Hone, Kate; Sutcliffe, Lorna; Gibbs, Jo; Sadiq, Syed Tariq; Szczepura, Ala; Sonnenberg, Pam; Estcourt, Claudia
2015-08-26
The increasing pervasiveness of mobile technologies has given potential to transform healthcare by facilitating clinical management using software applications. These technologies may provide valuable tools in sexual health care and potentially overcome existing practical and cultural barriers to routine testing for sexually transmitted infections. In order to inform the design of a mobile health application for STIs that supports self-testing and self-management by linking diagnosis with online care pathways, we aimed to identify the dimensions and range of preferences for user interface design features among young people. Nine focus group discussions were conducted (n = 49) with two age-stratified samples (16 to 18 and 19 to 24 year olds) of young people from Further Education colleges and Higher Education establishments. Discussions explored young people's views with regard to: the software interface; the presentation of information; and the ordering of interaction steps. Discussions were audio recorded and transcribed verbatim. Interview transcripts were analysed using thematic analysis. Four over-arching themes emerged: privacy and security; credibility; user journey support; and the task-technology-context fit. From these themes, 20 user interface design recommendations for mobile health applications are proposed. For participants, although privacy was a major concern, security was not perceived as a major potential barrier as participants were generally unaware of potential security threats and inherently trusted new technology. Customisation also emerged as a key design preference to increase attractiveness and acceptability. Considerable effort should be focused on designing healthcare applications from the patient's perspective to maximise acceptability. The design recommendations proposed in this paper provide a valuable point of reference for the health design community to inform development of mobile-based health interventions for the diagnosis and treatment of a number of other conditions for this target group, while stimulating conversation across multidisciplinary communities.
Building intuitive 3D interfaces for virtual reality systems
NASA Astrophysics Data System (ADS)
Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh
2007-03-01
This paper presents an exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. The work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. In order to establish this, a new user interface was created that applied various understood principles of interface design. A user study was then performed in which it was compared with an earlier interface for a series of medical visualization tasks.
Virtual Observatory Interfaces to the Chandra Data Archive
NASA Astrophysics Data System (ADS)
Tibbetts, M.; Harbo, P.; Van Stone, D.; Zografou, P.
2014-05-01
The Chandra Data Archive (CDA) plays a central role in the operation of the Chandra X-ray Center (CXC) by providing access to Chandra data. Proprietary interfaces have been the backbone of the CDA throughout the Chandra mission. While these interfaces continue to provide the depth and breadth of mission specific access Chandra users expect, the CXC has been adding Virtual Observatory (VO) interfaces to the Chandra proposal catalog and observation catalog. VO interfaces provide standards-based access to Chandra data through simple positional queries or more complex queries using the Astronomical Data Query Language. Recent development at the CDA has generalized our existing VO services to create a suite of services that can be configured to provide VO interfaces to any dataset. This approach uses a thin web service layer for the individual VO interfaces, a middle-tier query component which is shared among the VO interfaces for parsing, scheduling, and executing queries, and existing web services for file and data access. The CXC VO services provide Simple Cone Search (SCS), Simple Image Access (SIA), and Table Access Protocol (TAP) implementations for both the Chandra proposal and observation catalogs within the existing archive architecture. Our work with the Chandra proposal and observation catalogs, as well as additional datasets beyond the CDA, illustrates how we can provide configurable VO services to extend core archive functionality.
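Simple Cone Search services of this kind accept a plain HTTP query with RA, DEC and SR parameters and return a VOTable. A minimal sketch follows; the endpoint URL is hypothetical and should be replaced with the address published by the CXC.

```python
import requests

# Hypothetical SCS endpoint for the Chandra observation catalog; the real URL may differ.
SCS_URL = "https://cda.harvard.edu/scs/obscat"

params = {
    "RA": 83.633,   # right ascension, degrees (ICRS)
    "DEC": 22.014,  # declination, degrees
    "SR": 0.25,     # search radius, degrees
}
response = requests.get(SCS_URL, params=params, timeout=30)
response.raise_for_status()
# An SCS response is a VOTable XML document; parse it with astropy.io.votable if needed.
print(response.text[:500])
```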
CE-SAM: a conversational interface for ISR mission support
NASA Astrophysics Data System (ADS)
Pizzocaro, Diego; Parizas, Christos; Preece, Alun; Braines, Dave; Mott, David; Bakdash, Jonathan Z.
2013-05-01
There is considerable interest in natural language conversational interfaces. These allow for complex user interactions with systems, such as fulfilling information requirements in dynamic environments, without requiring extensive training or a technical background (e.g. in formal query languages or schemas). To leverage the advantages of conversational interactions we propose CE-SAM (Controlled English Sensor Assignment to Missions), a system that guides users through refining and satisfying their information needs in the context of Intelligence, Surveillance, and Reconnaissance (ISR) operations. The rapidly-increasing availability of sensing assets and other information sources poses substantial challenges to effective ISR resource management. In a coalition context, the problem is even more complex, because assets may be "owned" by different partners. We show how CE-SAM allows a user to refine and relate their ISR information needs to pre-existing concepts in an ISR knowledge base, via conversational interaction implemented on a tablet device. The knowledge base is represented using Controlled English (CE) - a form of controlled natural language that is both human-readable and machine processable (i.e. can be used to implement automated reasoning). Users interact with the CE-SAM conversational interface using natural language, which the system converts to CE for feeding-back to the user for confirmation (e.g. to reduce misunderstanding). We show that this process not only allows users to access the assets that can support their mission needs, but also assists them in extending the CE knowledge base with new concepts.
Vali, Faisal; Hong, Robert
2007-10-11
With the evolution of AJAX, Ruby on Rails, advanced dynamic XHTML technologies and the advent of powerful user interface libraries for JavaScript (Ext, Yahoo User Interface Library), developers now have the ability to provide truly rich interfaces within web browsers, with reasonable effort and without third-party plugins. We designed and developed an example of such a solution. The user interface allows radiation oncology practices to intuitively manage different dose fractionation schemes by helping estimate total dose to irradiated organs.
Bitfrost: The One Laptop per Child Security Model
2007-07-01
is only modifiable by the user through a graphical interface. We plan to use social pressure to convince application developers to distribute...
Evaluation of cardiac signals using discrete wavelet transform with MATLAB graphical user interface.
John, Agnes Aruna; Subramanian, Aruna Priyadharshni; Jaganathan, Saravana Kumar; Sethuraman, Balasubramanian
2015-01-01
To process electrocardiogram (ECG) signals using a MATLAB-based graphical user interface (GUI) and to classify the signals based on heart rate. The subject's condition was identified using R-peak detection based on the discrete wavelet transform, followed by a Bayes classifier that classifies the ECG signals. The GUI was designed to display the ECG signal plot. Of the records obtained from the MIT database, 18 patients had a normal heart rate and 9 patients had an abnormal heart rate; 14.81% of the patients suffered from tachycardia and 18.52% had bradycardia. The proposed GUI display was found useful for analysis of the digitized ECG signal by a non-technical user and may help in diagnostics. Further improvement can be achieved by employing a field-programmable gate array for real-time processing of cardiac signals. Copyright © 2015 Cardiological Society of India. Published by Elsevier B.V. All rights reserved.
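A minimal sketch of wavelet-based R-peak detection in the spirit of this abstract, written in Python with PyWavelets and SciPy on a synthetic signal; the wavelet, decomposition level, threshold and the translation from the MATLAB GUI are all assumptions.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs):
    """Rough R-peak detection: keep mid-band wavelet details, reconstruct, then threshold."""
    coeffs = pywt.wavedec(ecg, "db4", level=4)
    coeffs[0] = np.zeros_like(coeffs[0])    # drop the low-frequency approximation (baseline)
    coeffs[-1] = np.zeros_like(coeffs[-1])  # drop the finest details (high-frequency noise)
    qrs = pywt.waverec(coeffs, "db4")[: len(ecg)]
    peaks, _ = find_peaks(np.abs(qrs), height=3 * np.std(qrs), distance=int(0.3 * fs))
    return peaks

fs = 360  # Hz, a typical MIT-BIH sampling rate
t = np.arange(0, 10, 1 / fs)
beat_times = np.arange(0.5, 10, 0.8)  # synthetic beats, roughly 75 bpm
ecg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
peaks = detect_r_peaks(ecg, fs)
print("beats detected:", len(peaks), "-> approx.", len(peaks) * 6, "bpm over 10 s")
```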
Methods for Improving the User-Computer Interface. Technical Report.
ERIC Educational Resources Information Center
McCann, Patrick H.
This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…
NASA Technical Reports Server (NTRS)
Lewis, Clayton; Wilde, Nick
1989-01-01
Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.
Interface Metaphors for Interactive Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Robert J.; Blaha, Leslie M.
To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user's knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display as well as the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms. Or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.
Stand-alone digital data storage control system including user control interface
NASA Technical Reports Server (NTRS)
Wright, Kenneth D. (Inventor); Gray, David L. (Inventor)
1994-01-01
A storage control system includes an apparatus and method for user control of a storage interface to operate a storage medium to store data obtained by a real-time data acquisition system. Digital data received in serial format from the data acquisition system is first converted to a parallel format and then provided to the storage interface. The operation of the storage interface is controlled in accordance with instructions based on user control input from a user. Also, a user status output is displayed in accordance with storage data obtained from the storage interface. By allowing the user to control and monitor the operation of the storage interface, a stand-alone, user-controllable data storage system is provided for storing the digital data obtained by a real-time data acquisition system.
Voice and gesture-based 3D multimedia presentation tool
NASA Astrophysics Data System (ADS)
Fukutake, Hiromichi; Akazawa, Yoshiaki; Okada, Yoshihiro
2007-09-01
This paper proposes a 3D multimedia presentation tool that allows the user to manipulate it intuitively only through voice input and gesture input, without using a standard keyboard or a mouse device. The authors developed this system as a presentation tool to be used in a presentation room equipped with a large screen, like an exhibition room in a museum, because, in such a presentation environment, it is better to use voice commands and gesture pointing input rather than a keyboard or a mouse device. This system was developed using IntelligentBox, which is a component-based 3D graphics software development system. IntelligentBox already provides various types of 3D visible, reactive functional components called boxes, e.g., a voice input component and various multimedia handling components. IntelligentBox also provides a dynamic data linkage mechanism called slot-connection that allows the user to develop 3D graphics applications by combining already existing boxes through direct manipulations on a computer screen. Using IntelligentBox, the 3D multimedia presentation tool proposed in this paper was also developed as combined components only through direct manipulations on a computer screen. The authors have already proposed a 3D multimedia presentation tool using a stage metaphor and its voice input interface. This time, we extended the system to make it accept user gesture input in addition to voice commands. This paper explains the details of the proposed 3D multimedia presentation tool and especially describes its component-based voice and gesture input interfaces.
Important ingredients for health adaptive information systems.
Senathirajah, Yalini; Bakken, Suzanne
2011-01-01
Healthcare information systems frequently do not truly meet clinician needs, due to the complexity, variability, and rapid change in medical contexts. Recently the internet world has been transformed by approaches commonly termed 'Web 2.0'. This paper proposes a Web 2.0 model for a healthcare adaptive architecture. The vision includes creating modular, user-composable systems which aim to make all necessary information from multiple internal and external sources available via a platform, for the user to use, arrange, recombine, author, and share at will, using rich interfaces where advisable. Clinicians can create a set of 'widgets' and 'views' which can transform data, reflect their domain knowledge and cater to their needs, using simple drag and drop interfaces without the intervention of programmers. We have built an example system, MedWISE, embodying the user-facing parts of the model. This approach to HIS is expected to have several advantages, including greater suitability to user needs (reflecting clinician rather than programmer concepts and priorities), incorporation of multiple information sources, agile reconfiguration to meet emerging situations and new treatment deployment, capture of user domain expertise and tacit knowledge, efficiencies due to workflow and human-computer interaction improvements, and greater user acceptance.
A Multimodal Adaptive Wireless Control Interface for People With Upper-Body Disabilities.
Fall, Cheikh Latyr; Quevillon, Francis; Blouin, Martine; Latour, Simon; Campeau-Lecours, Alexandre; Gosselin, Clement; Gosselin, Benoit
2018-06-01
This paper describes a multimodal body-machine interface (BoMI) to help individuals with upper-limb disabilities using advanced assistive technologies, such as robotic arms. The proposed system uses a wearable and wireless body sensor network (WBSN) supporting up to six sensor nodes to measure the natural upper-body gesture of the users and translate it into control commands. Natural gesture of the head and upper-body parts, as well as muscular activity, are measured using inertial measurement units (IMUs) and surface electromyography (sEMG) using custom-designed multimodal wireless sensor nodes. An IMU sensing node is attached to a headset worn by the user. It has a size of 2.9 cm × 2.9 cm, a maximum power consumption of 31 mW, and provides an angular precision of 1°. Multimodal patch sensor nodes, including both IMU and sEMG sensing modalities, are placed over the user's able-body parts to measure the motion and muscular activity. These nodes have a size of 2.5 cm × 4.0 cm and a maximum power consumption of 11 mW. The proposed BoMI runs on a Raspberry Pi. It can adapt to several types of users through different control scenarios using the head and shoulder motion, as well as muscular activity, and provides a power autonomy of up to 24 h. JACO, a 6-DoF assistive robotic arm, is used as a testbed to evaluate the performance of the proposed BoMI. Ten able-bodied subjects performed ADLs while operating the AT device, using the Test d'Évaluation des Membres Supérieurs de Personnes Âgées to evaluate and compare the proposed BoMI with the conventional joystick controller. It is shown that the users can perform all tasks with the proposed BoMI, almost as fast as with the joystick controller, with only 30% time overhead on average, while being potentially more accessible to the upper-body disabled who cannot use the conventional joystick controller. Tests show that control performance with the proposed BoMI improved by up to 17% on average, after three trials.
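As a purely hypothetical illustration of how head-orientation readings from the IMU node might be translated into discrete control commands, consider the sketch below; the axes, dead zone and command names are assumptions and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    pitch: float  # degrees; assumed positive when nodding forward
    roll: float   # degrees; assumed positive when tilting right

def pose_to_command(pose: HeadPose, dead_zone: float = 10.0) -> str:
    """Map head orientation to a discrete command, with a dead zone to reject drift."""
    if abs(pose.pitch) < dead_zone and abs(pose.roll) < dead_zone:
        return "hold"
    if abs(pose.pitch) >= abs(pose.roll):
        return "move_forward" if pose.pitch > 0 else "move_back"
    return "move_right" if pose.roll > 0 else "move_left"

print(pose_to_command(HeadPose(pitch=18.0, roll=4.0)))   # -> move_forward
print(pose_to_command(HeadPose(pitch=2.0, roll=-14.0)))  # -> move_left
```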
A graphical user interface for infant ERP analysis.
Kaatiala, Jussi; Yrttiaho, Santeri; Forssman, Linda; Perdue, Katherine; Leppänen, Jukka
2014-09-01
Recording of event-related potentials (ERPs) is one of the best-suited technologies for examining brain function in human infants. Yet the existing software packages are not optimized for the unique requirements of analyzing artifact-prone ERP data from infants. We developed a new graphical user interface that enables an efficient implementation of a two-stage approach to the analysis of infant ERPs. In the first stage, video records of infant behavior are synchronized with ERPs at the level of individual trials to reject epochs with noncompliant behavior and other artifacts. In the second stage, the interface calls MATLAB and EEGLAB (Delorme & Makeig, Journal of Neuroscience Methods 134(1):9-21, 2004) functions for further preprocessing of the ERP signal itself (i.e., filtering, artifact removal, interpolation, and rereferencing). Finally, methods are included for data visualization and analysis by using bootstrapped group averages. Analyses of simulated and real EEG data demonstrated that the proposed approach can be effectively used to establish task compliance, remove various types of artifacts, and perform representative visualizations and statistical comparisons of ERPs. The interface is available for download from http://www.uta.fi/med/icl/methods/eeg.html in a format that is widely applicable to ERP studies with special populations and open for further editing by users.
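The bootstrapped group averages mentioned above can be sketched in a few lines of NumPy; the array shapes, number of resamples and confidence level are assumptions.

```python
import numpy as np

def bootstrap_erp_mean(erps, n_boot=2000, ci=95, seed=0):
    """erps: array of shape (n_subjects, n_timepoints), one averaged ERP per subject.
    Returns the grand average plus a bootstrap confidence band across subjects."""
    rng = np.random.default_rng(seed)
    n_subj = erps.shape[0]
    boot_means = np.empty((n_boot, erps.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n_subj, size=n_subj)  # resample subjects with replacement
        boot_means[b] = erps[idx].mean(axis=0)
    alpha = (100 - ci) / 2
    lo, hi = np.percentile(boot_means, [alpha, 100 - alpha], axis=0)
    return erps.mean(axis=0), lo, hi

# Synthetic example: 12 infants, 300 time points of averaged ERP each.
erps = np.random.default_rng(1).normal(size=(12, 300))
grand, lo, hi = bootstrap_erp_mean(erps)
print(grand.shape, lo.shape, hi.shape)
```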
Experimenter's laboratory for visualized interactive science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.
1992-01-01
The science activities of the 1990's will require the analysis of complex phenomena and large diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high performance graphics workstations; and interoperable data standards and translators. To meet these needs, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. A Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.
Experimenter's laboratory for visualized interactive science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.
1993-01-01
The science activities of the 1990's will require the analysis of complex phenomena and large diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high performance graphics workstations; and interoperable data standards and translators. To meet these needs, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. A Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color-shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.
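Both entries rely on NetCDF as a self-describing, network-transparent data format. A minimal sketch using the netCDF4 Python package illustrates the idea; the variable names and attributes are purely illustrative.

```python
import numpy as np
from netCDF4 import Dataset

# Write a small self-describing file: dimensions, variables and attributes travel together.
with Dataset("demo.nc", "w") as nc:
    nc.createDimension("time", 24)
    temp = nc.createVariable("temperature", "f4", ("time",))
    temp.units = "K"
    temp.long_name = "surface air temperature"
    temp[:] = 280.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, 24))

# Any NetCDF-aware tool can now discover the structure without external documentation.
with Dataset("demo.nc") as nc:
    var = nc.variables["temperature"]
    print(var.long_name, var.units, float(var[:].mean()))
```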
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneously visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over VC and AU approaches. Questionnaires' results indicate that the HVA approach was the less demanding gaze-independent interface. Interestingly, the P300 grand average for HVA approach coincides with an almost perfect sum of P300 evoked separately by VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with state-of-the-art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Comparing two anesthesia information management system user interfaces: a usability evaluation.
Wanderer, Jonathan P; Rao, Anoop V; Rothwell, Sarah H; Ehrenfeld, Jesse M
2012-11-01
Anesthesia information management systems (AIMS) have been developed by multiple vendors and are deployed in thousands of operating rooms around the world, yet not much is known about measuring and improving AIMS usability. We developed a methodology for evaluating AIMS usability in a low-fidelity simulated clinical environment and used it to compare an existing user interface with a revised version. We hypothesized that the revised user interface would be more useable. In a low-fidelity simulated clinical environment, twenty anesthesia providers documented essential anesthetic information for the start of the case using both an existing and a revised user interface. Participants had not used the revised user interface previously and completed a brief training exercise prior to the study task. All participants completed a workload assessment and a satisfaction survey. All sessions were recorded. Multiple usability metrics were measured. The primary outcome was documentation accuracy. Secondary outcomes were perceived workload, number of documentation steps, number of user interactions, and documentation time. The interfaces were compared and design problems were identified by analyzing recorded sessions and survey results. Use of the revised user interface was shown to improve documentation accuracy from 85.1% to 92.4%, a difference of 7.3% (95% confidence interval [CI] for the difference 1.8 to 12.7). The revised user interface decreased the number of user interactions by 6.5 for intravenous documentation (95% CI 2.9 to 10.1) and by 16.1 for airway documentation (95% CI 11.1 to 21.1). The revised user interface required 3.8 fewer documentation steps (95% CI 2.3 to 5.4). Airway documentation time was reduced by 30.5 seconds with the revised workflow (95% CI 8.5 to 52.4). There were no significant time differences noted in intravenous documentation or in total task time. No difference in perceived workload was found between the user interfaces. Two user interface design problems were identified in the revised user interface. The usability of anesthesia information management systems can be evaluated using a low-fidelity simulated clinical environment. User testing of the revised user interface showed improvement in some usability metrics and highlighted areas for further revision. Vendors of AIMS and those who use them should consider adopting methods to evaluate and improve AIMS usability.
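The reported accuracy difference (85.1% versus 92.4%, 95% CI 1.8 to 12.7) is consistent with a standard two-proportion confidence interval. The sketch below assumes independent proportions and a hypothetical number of documentation items per arm; the study's actual analysis may have been paired.

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Approximate 95% CI for the difference of two independent proportions (Wald interval)."""
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff, diff + z * se

# Hypothetical denominators: 20 providers documenting roughly 13 required items per interface.
lo, d, hi = diff_ci(p1=0.851, n1=260, p2=0.924, n2=260)
print(f"difference = {d:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```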
Collaborative Aerial-Drawing System for Supporting Co-Creative Communication
NASA Astrophysics Data System (ADS)
Osaki, Akihiro; Taniguchi, Hiroyuki; Miwa, Yoshiyuki
This paper describes a collaborative augmented reality (AR) system with which multiple users can simultaneously handwrite 3D lines in the air and manipulate those lines directly in the real world. In addition, we propose a new technique for co-creative communication utilizing the 3D drawing activity. To date, various 3D user interfaces have been proposed. Although most of them aim to solve specific problems in virtual environments, the possibility of 3D drawing as an expressive medium has not yet been explored. Accordingly, we paid special attention to interaction with real objects in daily life, and considered how to manipulate real objects and 3D lines without distinction, through the same actions. The developed AR system consists of a stereoscopic head-mounted display, a drawing tool, 6DOF sensors measuring three-dimensional position and Euler angles, and a 3D user interface that enables users to push, grasp, and pitch 3D lines directly with the drawing tool. Additionally, users can pick up a desired color from either the landscape or a virtual line through direct interaction with this tool. For sharing 3D lines among multiple users at the same place, a distributed AR system has been developed that mutually sends and receives drawing data between systems. With the developed system, users can design jointly in real space by arranging each 3D drawing through direct manipulation. Moreover, new entertainment applications become possible, such as playing catch, fencing, and the like.
Interoperability through standardization: Electronic mail, and X Window systems
NASA Technical Reports Server (NTRS)
Amin, Ashok T.
1993-01-01
Since the introduction of computing machines, there have been continual advances in computer and communication technologies, which are now approaching their limits. The user interface has evolved from a row of switches, through character-based interfaces using teletype and then video terminals, to the present-day graphical user interface. It is expected that the next significant advances will come in the availability of services, such as electronic mail and directory services, as standards for applications are developed, and in 'easy to use' interfaces, such as graphical user interfaces (for example, Windows and X Window), which are being standardized. Various proprietary electronic mail (email) systems are in use within organizations at each NASA center. Each system provides email services to users within an organization; however, support for email services across organizations and across centers exists only to a varying degree and is often not easy to use. A recent NASA email initiative is intended 'to provide a simple way to send email across organizational boundaries without disruption of the installed base.' The initiative calls for integration of existing organizational email systems through gateways connected by a message switch, supporting X.400 and SMTP protocols, to create a NASA-wide email system, and for implementation of NASA-wide email directory services based on the OSI standard X.500. A brief overview of MSFC efforts as a part of this initiative is given. Window-based graphical user interfaces make computers easy to use. The X Window protocol was developed at the Massachusetts Institute of Technology in 1984/1985 to provide a uniform window-based interface in a distributed computing environment with heterogeneous computers. It has since become a standard supported by a number of major manufacturers. X Window systems, terminals and workstations, and X Window applications are becoming available. However, the impact of their use on network traffic in the local area network environment is not well understood. It is expected that the use of X Window systems will increase at MSFC, especially for Unix-based systems. An overview of the X Window protocol is presented and its impact on network traffic is examined. It is proposed that an analytical model of X Window systems in the network environment be developed and validated through the use of measurements to generate application and user profiles.
Starting Over: Current Issues in Online Catalog User Interface Design.
ERIC Educational Resources Information Center
Crawford, Walt
1992-01-01
Discussion of online catalogs focuses on issues in interface design. Issues addressed include understanding the user base; common user access (CUA) with personal computers; common command language (CCL); hyperlinks; screen design issues; differences from card catalogs; indexes; graphic user interfaces (GUIs); color; online help; and remote users.…
Graphical user interface for yield and dose estimations for cyclotron-produced technetium
NASA Astrophysics Data System (ADS)
Hou, X.; Vuckovic, M.; Buckley, K.; Bénard, F.; Schaffer, P.; Ruth, T.; Celler, A.
2014-07-01
The cyclotron-based 100Mo(p,2n)99mTc reaction has been proposed as an alternative method for solving the shortage of 99mTc. With this production method, however, even if highly enriched molybdenum is used, various radioactive and stable isotopes will be produced simultaneously with 99mTc. In order to optimize reaction parameters and estimate potential patient doses from radiotracers labeled with cyclotron produced 99mTc, the yields for all reaction products must be estimated. Such calculations, however, are extremely complex and time consuming. Therefore, the objective of this study was to design a graphical user interface (GUI) that would automate these calculations, facilitate analysis of the experimental data, and predict dosimetry. The resulting GUI, named Cyclotron production Yields and Dosimetry (CYD), is based on Matlab®. It has three parts providing (a) reaction yield calculations, (b) predictions of gamma emissions and (c) dosimetry estimations. The paper presents the outline of the GUI, lists the parameters that must be provided by the user, discusses the details of calculations and provides examples of the results. Our initial experience shows that the proposed GUI allows the user to very efficiently calculate the yields of reaction products and analyze gamma spectroscopy data. However, it is expected that the main advantage of this GUI will be at the later clinical stage when entering reaction parameters will allow the user to predict production yields and estimate radiation doses to patients for each particular cyclotron run.
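The yield calculations that the GUI automates typically reduce to integrating the reaction cross section over the proton energy lost in the target. The sketch below shows that standard thick-target integral on made-up cross-section and stopping-power tables; the numbers, and the simplified formula itself, are illustrative and are not taken from the CYD implementation.

```python
# Thick-target production-rate sketch (illustrative, not the CYD code):
# R = (I/e) * (N_A/A) * integral of sigma(E)/S(E) dE over the energy lost in
# the target, with sigma in cm^2 and S the mass stopping power in MeV*cm^2/g.
# Cross-section and stopping-power tables below are made up.
import numpy as np

E = np.linspace(10.0, 24.0, 141)                       # proton energy grid, MeV
sigma = 0.25e-24 * np.exp(-((E - 16.0) / 4.0) ** 2)    # fake sigma(E), cm^2
S = 30.0 - 0.5 * (E - 10.0)                            # fake stopping power, MeV*cm^2/g

I_beam = 100e-6            # beam current, A (assumed)
e = 1.602e-19              # elementary charge, C
N_A = 6.022e23             # Avogadro's number, 1/mol
A_mo = 100.0               # molar mass of Mo-100, g/mol

rate = (I_beam / e) * (N_A / A_mo) * np.trapz(sigma / S, E)   # nuclei/s
print(f"production rate ~ {rate:.3e} nuclei/s")
```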
Graphical user interface for yield and dose estimations for cyclotron-produced technetium.
Hou, X; Vuckovic, M; Buckley, K; Bénard, F; Schaffer, P; Ruth, T; Celler, A
2014-07-07
The cyclotron-based (100)Mo(p,2n)(99m)Tc reaction has been proposed as an alternative method for solving the shortage of (99m)Tc. With this production method, however, even if highly enriched molybdenum is used, various radioactive and stable isotopes will be produced simultaneously with (99m)Tc. In order to optimize reaction parameters and estimate potential patient doses from radiotracers labeled with cyclotron produced (99m)Tc, the yields for all reaction products must be estimated. Such calculations, however, are extremely complex and time consuming. Therefore, the objective of this study was to design a graphical user interface (GUI) that would automate these calculations, facilitate analysis of the experimental data, and predict dosimetry. The resulting GUI, named Cyclotron production Yields and Dosimetry (CYD), is based on Matlab®. It has three parts providing (a) reaction yield calculations, (b) predictions of gamma emissions and (c) dosimetry estimations. The paper presents the outline of the GUI, lists the parameters that must be provided by the user, discusses the details of calculations and provides examples of the results. Our initial experience shows that the proposed GUI allows the user to very efficiently calculate the yields of reaction products and analyze gamma spectroscopy data. However, it is expected that the main advantage of this GUI will be at the later clinical stage when entering reaction parameters will allow the user to predict production yields and estimate radiation doses to patients for each particular cyclotron run.
Enhanced networks operations using the X Window System
NASA Technical Reports Server (NTRS)
Linares, Irving
1993-01-01
We propose an X Window Graphical User Interface (GUI) which is tailored to the operations of NASA GSFC's Network Control Center (NCC), the NASA Ground Terminal (NGT), the White Sands Ground Terminal (WSGT), and the Second Tracking and Data Relay Satellite System (TDRSS) Ground Terminal (STGT). The proposed GUI can also be easily extended to other Ground Network (GN) Tracking Stations due to its standardized nature.
Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng
2017-01-01
Brain-machine interfaces (BMIs) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guidance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments on object manipulation tasks, performed while avoiding an obstacle in the workspace, were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123
Su, Kuo-Wei; Liu, Cheng-Li
2012-06-01
A conventional Nursing Information System (NIS), which supports the role of the nurse in some areas, is typically deployed as an immobile system. However, a traditional information system cannot respond to patients' conditions in real time, causing delays in the availability of this information. With the advances of information technology, mobile devices are increasingly being used to extend the human mind's limited capacity to recall and process large numbers of relevant variables and to support information management, general administration, and clinical practice. Unfortunately, there have been few studies on combining a well-designed small-screen interface with a personal digital assistant (PDA) in clinical nursing. Some researchers have found that user interface design is an important factor in determining the usability and potential use of a mobile system. Therefore, this study proposed a systematic approach to the development of a mobile nursing information system (MNIS) based on Mobile Human-Computer Interaction (M-HCI) for use in clinical nursing. The system combines principles of small-screen interface design with user-specified requirements. In addition, the iconic functions were designed with a metaphor concept that helps users learn the system more quickly and with less working-memory load. An experiment involving learnability testing, thinking aloud, and a questionnaire investigation was conducted to evaluate the effect of the MNIS on a PDA. The results show that the proposed MNIS performs well in terms of learnability and yields higher satisfaction with respect to symbols, terminology, and system information.
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
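The abstract describes rule-based switching between optimization techniques when one fails to make progress. A minimal illustration of that idea, using SciPy's general-purpose minimizer rather than the paper's NLP system, might look like the following; the tolerance settings, the test objective, and the ordering of methods are assumptions.

```python
# Rule-based optimizer switching sketch (illustrative; not the paper's system).
# Try a sequence of methods and fall back when one fails to converge.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Rosenbrock function as a stand-in for a structural design objective.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

x0 = np.array([-1.2, 1.0])
for method in ("CG", "BFGS", "Nelder-Mead"):      # assumed switching order
    result = minimize(objective, x0, method=method, options={"maxiter": 200})
    if result.success:
        print(f"{method} converged: x = {result.x}")
        break
    print(f"{method} did not converge, switching methods")
    x0 = result.x                                  # warm-start the next method
```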
Tsui, Chun Sing Louis; Gan, John Q; Roberts, Stephen J
2009-03-01
Due to the non-stationarity of EEG signals, online training and adaptation are essential to EEG based brain-computer interface (BCI) systems. Self-paced BCIs offer more natural human-machine interaction than synchronous BCIs, but it is a great challenge to train and adapt a self-paced BCI online because the user's control intention and timing are usually unknown. This paper proposes a novel motor imagery based self-paced BCI paradigm for controlling a simulated robot in a specifically designed environment which is able to provide user's control intention and timing during online experiments, so that online training and adaptation of the motor imagery based self-paced BCI can be effectively investigated. We demonstrate the usefulness of the proposed paradigm with an extended Kalman filter based method to adapt the BCI classifier parameters, with experimental results of online self-paced BCI training with four subjects.
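The paper adapts BCI classifier parameters online with an extended Kalman filter. As a rough, simplified illustration (a plain linear Kalman filter on the weights of a linear model, not the authors' EKF), the recursive update could be sketched as follows; the dimensions, noise settings, and data are assumptions.

```python
# Simplified online adaptation of linear classifier weights with a Kalman
# filter (illustrative; the paper uses an extended Kalman filter).
import numpy as np

rng = np.random.default_rng(1)
d = 4                               # feature dimension (assumed)
w_true = rng.normal(size=d)         # "true" weights generating the targets

w = np.zeros(d)                     # weight estimate (state)
P = np.eye(d)                       # state covariance
Q = 1e-4 * np.eye(d)                # process noise: allows slow drift
R = 0.25                            # measurement noise variance

for t in range(500):
    x = rng.normal(size=d)                            # features for trial t
    y = float(x @ w_true) + rng.normal(scale=0.5)     # observed target value
    P = P + Q                                         # predict (random walk)
    S = float(x @ P @ x) + R                          # innovation variance
    K = (P @ x) / S                                   # Kalman gain
    w = w + K * (y - float(x @ w))                    # correct the weights
    P = P - np.outer(K, x) @ P                        # update covariance

print("estimated weights:", np.round(w, 2))
print("true weights:     ", np.round(w_true, 2))
```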
User Interface Technology for Formal Specification Development
NASA Technical Reports Server (NTRS)
Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Formal specification development and modification are an essential component of the knowledge-based software life cycle. User interface technology is needed to empower end-users to create their own formal specifications. This paper describes the advanced user interface for AMPHION, a knowledge-based software engineering system that targets scientific subroutine libraries. AMPHION is a generic, domain-independent architecture that is specialized to an application domain through a declarative domain theory. Formal specification development and reuse is made accessible to end-users through an intuitive graphical interface that provides semantic guidance in creating diagrams denoting formal specifications in an application domain. The diagrams also serve to document the specifications. Automatic deductive program synthesis ensures that end-user specifications are correctly implemented. The tables that drive AMPHION's user interface are automatically compiled from a domain theory; portions of the interface can be customized by the end-user. The user interface facilitates formal specification development by hiding syntactic details, such as logical notation. It also turns some of the barriers for end-user specification development associated with strongly typed formal languages into active sources of guidance, without restricting advanced users. The interface is especially suited for specification modification. AMPHION has been applied to the domain of solar system kinematics through the development of a declarative domain theory. Testing over six months with planetary scientists indicates that AMPHION's interactive specification acquisition paradigm enables users to develop, modify, and reuse specifications at least an order of magnitude more rapidly than manual program development.
How to Develop a User Interface That Your Real Users Will Love
ERIC Educational Resources Information Center
Phillips, Donald
2012-01-01
A "user interface" is the part of an interactive system that bridges the user and the underlying functionality of the system. But people sometimes forget that the best interfaces will provide a platform to optimize the users' interactions so that they support and extend the users' activities in effective, useful, and usable ways. To look at it…
ERIC Educational Resources Information Center
Park, Hyungjoo; Song, Hae-Deok
2015-01-01
Given that a user interface interacts with users, a critical factor to be considered in improving the usability of an e-learning user interface is user-friendliness. Affordances enable users to more easily approach and engage in learning tasks because they strengthen positive, activating emotions. However, most studies on affordances limit…
Computer-Based Tools for Evaluating Graphical User Interfaces
NASA Technical Reports Server (NTRS)
Moore, Loretta A.
1997-01-01
The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; rather, it should be part of an iterative design cycle, with the output of evaluation fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research, an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.
Lin, Yi-Jung; Speedie, Stuart
2003-01-01
User interface design is one of the most important parts of developing applications. Nowadays, a quality user interface must not only accommodate interaction between machines and users, but also recognize differences among users and provide functionality tailored from role to role or even from individual to individual. With the web-based application of our Teledermatology consult system, the development environment provides highly useful opportunities to create dynamic user interfaces, which lets us gain greater access control and has the potential to increase the efficiency of the system. We describe the two models of user interfaces in our system: Role-based and Adaptive. PMID:14728419
Towards automation of user interface design
NASA Technical Reports Server (NTRS)
Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst
1992-01-01
This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMS's) which basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to get a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.
Chen, Hao; Xie, Xiaoyun; Shu, Wanneng; Xiong, Naixue
2016-10-15
With the rapid growth of wireless sensor applications, the user interfaces and configurations of smart homes have become so complicated and inflexible that users usually have to spend a great amount of time studying them and adapting to their expected operation. In order to improve user experience, a weighted hybrid recommender system based on a Kalman Filter model is proposed to predict what users might want to do next, especially when users are located in a smart home with an enhanced living environment. Specifically, a weight hybridization method was introduced, which combines contextual collaborative filtering and contextual content-based recommendations. This method inherits the advantages of the optimum regression and the stability features of the proposed adaptive Kalman Filter model, and it can predict and revise the weight of each system component dynamically. Experimental results show that the hybrid recommender system can optimize the distribution of weights of each component, and achieve more reasonable recall and precision rates.
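The abstract combines collaborative-filtering and content-based scores with weights that a Kalman filter revises over time. A toy version of that weighting step (scalar random-walk Kalman filters tracking each component's recent hit rate, then normalizing into blend weights) is sketched below; it is an interpretation of the idea, not the authors' implementation.

```python
# Toy weighted hybrid recommender sketch (an interpretation of the idea, not
# the paper's implementation).  Each component's weight is tracked by a scalar
# Kalman filter fed with that component's observed hit/miss feedback.
import numpy as np

class ScalarKalman:
    def __init__(self, x0=0.5, p0=1.0, q=1e-3, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r
    def update(self, z):
        self.p += self.q                       # predict (random-walk model)
        k = self.p / (self.p + self.r)         # Kalman gain
        self.x += k * (z - self.x)             # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

rng = np.random.default_rng(2)
cf_weight, cb_weight = ScalarKalman(), ScalarKalman()

for step in range(200):
    # 1 if the component's top suggestion matched the user's action, else 0.
    cf_hit = float(rng.random() < 0.7)         # simulated feedback streams
    cb_hit = float(rng.random() < 0.5)
    w_cf, w_cb = cf_weight.update(cf_hit), cb_weight.update(cb_hit)

# Normalize the tracked weights and blend the two component scores.
w_cf, w_cb = w_cf / (w_cf + w_cb), w_cb / (w_cf + w_cb)
cf_score, cb_score = 0.8, 0.3                  # component scores for one item
print("blend weights:", round(w_cf, 2), round(w_cb, 2))
print("hybrid score :", round(w_cf * cf_score + w_cb * cb_score, 2))
```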
Chen, Hao; Xie, Xiaoyun; Shu, Wanneng; Xiong, Naixue
2016-01-01
With the rapid growth of wireless sensor applications, the user interfaces and configurations of smart homes have become so complicated and inflexible that users usually have to spend a great amount of time studying them and adapting to their expected operation. In order to improve user experience, a weighted hybrid recommender system based on a Kalman Filter model is proposed to predict what users might want to do next, especially when users are located in a smart home with an enhanced living environment. Specifically, a weight hybridization method was introduced, which combines contextual collaborative filtering and contextual content-based recommendations. This method inherits the advantages of the optimum regression and the stability features of the proposed adaptive Kalman Filter model, and it can predict and revise the weight of each system component dynamically. Experimental results show that the hybrid recommender system can optimize the distribution of weights of each component, and achieve more reasonable recall and precision rates. PMID:27754456
Designing the user interface: strategies for effective human-computer interaction
NASA Astrophysics Data System (ADS)
Shneiderman, B.
1998-03-01
In revising this popular book, Ben Shneiderman again provides a complete, current and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.
Buzzi, Marina; Leporini, Barbara
2009-07-01
This study aims to improve Wikipedia usability for the blind and promote the application of standards relating to Web accessibility and usability. First, accessibility and usability of Wikipedia home, search result and edit pages are analysed using the JAWS screen reader; next, suggestions for improving interaction are proposed and a new Wikipedia editing interface built. Most of the improvements were obtained using the Accessible Rich Internet Applications (WAI-ARIA) suite, developed by the World Wide Web Consortium (W3C) within the framework of the Web Accessibility Initiative (WAI). Last, a scenario of use compares interaction of blind people with the original and the modified interfaces. Our study highlights that although all contents are accessible via screen reader, usability issues exist due to the user's difficulties when interacting with the interface. The scenario of use shows how building an editing interface with the W3C WAI-ARIA suite eliminates many obstacles that can prevent blind users from actively contributing to Wikipedia. The modified Wikipedia editing page is simpler to use via a screen reader than the original one because ARIA ensures a page overview, rapid navigation, and total control of what is happening in the interface.
1990-11-01
to design and implement an adaptive intelligent interface for a command-and-control-style domain. The primary functionality of the resulting...technical tasks, as follows: 1. Analysis of Current Interface Technologies 2. Delineation of User Roles 3. Development of User Models 4. Design of Interface...Management Association (FEMA). In the initial version of the prototype, two distinct user models were designed. One type of user modeled by the system is
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu
2004-05-01
Technology that enables users to experience a remote site virtually is called telepresence. Telepresence systems using real-environment images are expected to be used in fields such as entertainment, medicine, and education. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
Promotion Assistance Tool for Mobile Phone Users
NASA Astrophysics Data System (ADS)
Intraprasert, P.; Jatikul, N.; Chantrapornchai, C.
In this paper, we propose an application tool to help analyze the mobile phone usage of a typical user. From past usage, the tool can determine the promotion that is most suitable for the user, which may reduce the total expense. The application consists of both a client and a server side. On the server side, the information for each promotion package offered by a phone operator is stored, as well as the usage database for each client. The client side is a user interface for both phone operators and users to enter their information. The analysis engine is based on KNN, ANN, decision tree, and Naïve Bayes models. A comparison shows that KNN and the decision tree outperform the others.
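Of the models the tool compares, KNN is the simplest to illustrate. The sketch below, with entirely made-up usage features (monthly call minutes, SMS count, data volume) and package labels, shows how such a recommendation could be produced with scikit-learn; it is not the tool's actual engine.

```python
# KNN promotion-package recommendation sketch (illustrative; not the tool's
# actual engine).  Features and labels are made up.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Columns: monthly call minutes, SMS count, data volume in MB (assumed).
usage = np.array([
    [600, 20, 200],    # heavy caller
    [550, 30, 150],
    [100, 400, 300],   # heavy texter
    [120, 350, 250],
    [80, 40, 4000],    # heavy data user
    [60, 30, 3500],
])
package = np.array(["voice", "voice", "sms", "sms", "data", "data"])

model = KNeighborsClassifier(n_neighbors=3).fit(usage, package)
new_user = np.array([[90, 50, 3000]])
print("suggested package:", model.predict(new_user)[0])
```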
Customization of user interfaces to reduce errors and enhance user acceptance.
Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram
2014-03-01
Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
User interface issues in supporting human-computer integrated scheduling
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Biefeld, Eric W.
1991-01-01
The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.
Intuitive wireless control of a robotic arm for people living with an upper body disability.
Fall, C L; Turgeon, P; Campeau-Lecours, A; Maheu, V; Boukadoum, M; Roy, S; Massicotte, D; Gosselin, C; Gosselin, B
2015-08-01
Assistive Technologies (ATs), also called extrinsic enablers, are useful tools for people living with various disabilities. The key points when designing such devices concern not only their intended goal, but also the most suitable human-machine interface (HMI) that should be provided to users. This paper describes the design of a highly intuitive wireless controller for people living with upper body disabilities who retain residual or complete control of their neck and shoulders. Tested with JACO, a six-degree-of-freedom (6-DOF) assistive robotic arm with 3 flexible fingers on its end-effector, the system described in this article is made of low-cost commercial off-the-shelf components and allows a full emulation of JACO's standard controller, a 3-axis joystick with 7 user buttons. To do so, three nine-degree-of-freedom (9-DOF) inertial measurement units (IMUs) are connected to a microcontroller and help measure the user's head and shoulder positions, using a complementary filter approach. The results are then transmitted to a base station via a 2.4-GHz low-power wireless transceiver and interpreted by the control algorithm running on a PC host. A dedicated software interface allows the user to quickly calibrate the controller, and translates the information into suitable commands for JACO. The proposed controller is thoroughly described, from the electronic design to the implemented algorithms and user interfaces. Its performance and future improvements are discussed as well.
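The controller fuses IMU data with a complementary filter to track head and shoulder orientation. A one-axis version of that filter (gyro integration blended with an accelerometer tilt estimate) is sketched below; the sample rate, blend coefficient, and signals are assumptions, not values from the paper.

```python
# One-axis complementary filter sketch for pitch estimation from an IMU
# (illustrative; sample rate, blend factor and data are assumptions).
import math
import numpy as np

dt = 0.01          # 100 Hz sampling (assumed)
alpha = 0.98       # weight given to the integrated gyro estimate

rng = np.random.default_rng(3)
true_pitch = np.deg2rad(20.0) * np.sin(2 * np.pi * 0.5 * np.arange(0, 5, dt))
gyro = np.gradient(true_pitch, dt) + rng.normal(scale=0.05, size=true_pitch.size)
# Accelerometer components for a pitch-only rotation (gravity = 1 g).
acc_x = np.sin(true_pitch) + rng.normal(scale=0.05, size=true_pitch.size)
acc_z = np.cos(true_pitch) + rng.normal(scale=0.05, size=true_pitch.size)

pitch = 0.0
for w, ax, az in zip(gyro, acc_x, acc_z):
    pitch_acc = math.atan2(ax, az)                   # noisy but drift-free
    pitch = alpha * (pitch + w * dt) + (1 - alpha) * pitch_acc

print(f"final estimate: {math.degrees(pitch):.1f} deg, "
      f"true: {math.degrees(true_pitch[-1]):.1f} deg")
```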
Enabling end-user network monitoring via the multicast consolidated proxy monitor
NASA Astrophysics Data System (ADS)
Kanwar, Anshuman; Almeroth, Kevin C.; Bhattacharyya, Supratik; Davy, Matthew
2001-07-01
The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing, and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to a) provide a view of the multicast network at varying levels of granularity, b) provide end users with a limited ability to query the multicast infrastructure in real time, and c) protect the infrastructure from an overwhelming amount of monitoring load through load control. Operationally, our scheme allows scaling to the ISP's dimensions, adaptability to new protocols (introduced as multicast evolves), threshold detection for crucial parameters, and an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.
BioSearch: a semantic search engine for Bio2RDF
Qiu, Honglei; Huang, Jiacheng
2017-01-01
Biomedical data are growing at an incredible pace and require substantial expertise to organize in a manner that makes them easily findable, accessible, interoperable and reusable. Massive effort has been devoted to using Semantic Web standards and technologies to create a network of Linked Data for the life sciences, among others. However, while these data are accessible through programmatic means, effective user interfaces for non-experts to SPARQL endpoints are few and far between. Contributing to user frustration is that data are not necessarily described using common vocabularies, thereby making it difficult to aggregate results, especially when distributed across multiple SPARQL endpoints. We propose BioSearch, a semantic search engine that uses ontologies to enhance federated query construction and organize search results. BioSearch also features a simplified query interface that allows users to optionally filter their keywords according to classes, properties and datasets. User evaluation demonstrated that BioSearch is more effective and usable than two state-of-the-art search and browsing solutions. Database URL: http://ws.nju.edu.cn/biosearch/ PMID:29220451
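Under the hood, a search like this translates user keywords into SPARQL issued against one or more endpoints. The snippet below shows a plain keyword-filtered SPARQL query sent with the SPARQLWrapper library; the endpoint URL and query shape are illustrative assumptions, not the queries BioSearch generates.

```python
# Keyword-filtered SPARQL query sketch using SPARQLWrapper (illustrative;
# the endpoint URL and query shape are assumptions, not BioSearch's output).
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/sparql"   # hypothetical Bio2RDF-style endpoint

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label WHERE {
  ?s rdfs:label ?label .
  FILTER(CONTAINS(LCASE(STR(?label)), "insulin"))
} LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], "-", row["label"]["value"])
```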
Wu, Zhen-Yu; Tseng, Yi-Ju; Chung, Yufang; Chen, Yee-Chun; Lai, Feipei
2012-08-01
With the rapid development of the Internet, both digitization and electronic orientation are required in various applications of daily life. For hospital-acquired infection control, a Web-based Hospital-acquired Infection Surveillance System was implemented. Clinical data from different hospitals and systems were collected and analyzed. The hospital-acquired infection screening rules in this system utilized this information to detect different patterns of defined hospital-acquired infection. Moreover, these data were integrated into the user interface of a single entry point to assist physicians and healthcare providers in making decisions. Based on Service-Oriented Architecture, web-service techniques, which are suitable for integrating heterogeneous platforms, protocols, and applications, were used. In summary, this system simplifies the workflow of hospital infection control and improves healthcare quality. However, it is possible for attackers to intercept the data transmission process or gain access to the user interface. To tackle illegal access and to prevent information from being stolen during transmission over the insecure Internet, a password-based user authentication scheme is proposed for information integrity.
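The abstract proposes a password-based authentication scheme but does not detail it here. As a generic, hedged illustration of the kind of building block such schemes rest on (not the authors' protocol), the snippet below shows salted password hashing and constant-time verification with Python's standard library.

```python
# Generic salted password hashing/verification sketch (illustrative; this is
# a standard building block, not the scheme proposed in the paper).
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong password", salt, digest))                # False
```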
Hübner, U; Klein, F; Hofstetter, J; Kammeyer, G; Seete, H
2000-01-01
Web-based drug ordering allows a growing number of hospitals without an in-house pharmacy to communicate seamlessly with their external pharmacy. Business process analysis and object-oriented modelling performed together with the users at a pilot hospital resulted in a comprehensive picture of the user and business requirements for electronic drug ordering. The user requirements were further validated with the help of a software prototype. In order to capture the needs of a large number of users, CAP10, a new method making use of pre-built models, is proposed. Solutions for coping with the technical requirements (interfacing with the business software at the pharmacy) and with the legal requirements (signing the orders) are presented.
A Model for Intelligent Computer-Aided Education Systems.
ERIC Educational Resources Information Center
Du Plessis, Johan P.; And Others
1995-01-01
Proposes a model for intelligent computer-aided education systems that is based on cooperative learning, constructive problem-solving, object-oriented programming, interactive user interfaces, and expert system techniques. Future research is discussed, and a prototype for teaching mathematics to 10- to 12-year-old students is appended. (LRW)
E-Training: Can Young and Older Users Be Accommodated with the Same Interface?
ERIC Educational Resources Information Center
Rivera-Nivar, Mericia; Pomales-Garcia, Cristina
2010-01-01
This work explores the feasibility of proposing universal design guidelines for E-training modules considering aging differences as an important factor. A controlled experiment was designed and conducted to evaluate the effects of module design characteristics on information recall, satisfaction, disorientation, and task workload, and the…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-17
... household interviews, and experimental research in laboratory and field settings, both for applied questionnaire evaluation and more basic research on response errors in surveys. [The remainder of the notice is a flattened respondent-burden table covering, among other items, research on computer-user interface design and household interview volunteers.]
CARE 3 user-friendly interface user's guide
NASA Technical Reports Server (NTRS)
Martensen, A. L.
1987-01-01
CARE 3 predicts the unreliability of highly reliable reconfigurable fault-tolerant systems that include redundant computers or computer systems. CARE3MENU is a user-friendly interface used to create an input for the CARE 3 program. The CARE3MENU interface has been designed to minimize user input errors. Although a CARE3MENU session may be successfully completed and all parameters may be within specified limits or ranges, the CARE 3 program is not guaranteed to produce meaningful results if the user incorrectly interprets the CARE 3 stochastic model. The CARE3MENU User Guide provides complete information on how to create a CARE 3 model with the interface. The CARE3MENU interface runs under the VAX/VMS operating system.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on the fusion of multiple motion sensors. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, a speed-up of approximately 45 times is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
bioLights: light emitting wear for visualizing lower-limb muscle activity.
Igarashi, Naoto; Suzuki, Kenji; Kawamoto, Hiroaki; Sankai, Yoshiyuki
2010-01-01
Analysis of muscle activity by electrophysiological techniques is commonly used to analyze biomechanics. Although the simultaneous and intuitive understanding of both muscle activity and body motion is important in various fields, it is difficult to realize. This paper proposes a novel technique for visualizing physiological signals related to muscle activity by means of surface electromyography. We developed a wearable light-emitting interface that indicates lower-limb muscle activity or muscular tension on the surface of the body in real time by displaying the shape of the activated muscle. The developed interface allows users to perceive muscle activity in an intuitive manner by relating the level of the muscle activity to the brightness level of the glowing interface placed on the corresponding muscle. In order to verify the advantage of the proposed method, a cognitive experiment was conducted to evaluate the system performance. We also conducted an evaluation experiment using the developed interface in conjunction with an exoskeleton robot, in order to investigate the possible applications of the developed interface in the field of neurorehabilitation.
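The interface relates the level of muscle activity to the brightness of the light-emitting wear. A common way to derive that level from raw surface EMG is a moving RMS envelope, which can then be scaled to a brightness value; the sketch below illustrates this with synthetic data and assumed parameters, not the developed system's processing chain.

```python
# Sketch: moving-RMS EMG envelope mapped to an LED brightness level
# (illustrative; window length, scaling and data are assumptions).
import numpy as np

fs = 1000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 2, 1 / fs)
burst = (t > 0.5) & (t < 1.5)               # simulated muscle contraction
emg = np.random.default_rng(4).normal(scale=np.where(burst, 0.8, 0.05))

window = int(0.1 * fs)                      # 100 ms RMS window
rms = np.sqrt(np.convolve(emg ** 2, np.ones(window) / window, mode="same"))

max_activation = 1.0                        # assumed calibration value
brightness = np.clip(rms / max_activation, 0.0, 1.0) * 255   # 8-bit PWM duty

print("peak brightness level:", int(brightness.max()))
```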
Method and System for Air Traffic Rerouting for Airspace Constraint Resolution
NASA Technical Reports Server (NTRS)
Erzberger, Heinz (Inventor); Morando, Alexander R. (Inventor); Sheth, Kapil S. (Inventor); McNally, B. David (Inventor); Clymer, Alexis A. (Inventor); Shih, Fu-tai (Inventor)
2017-01-01
A dynamic constraint avoidance route system automatically analyzes routes of aircraft flying, or to be flown, in or near constraint regions and attempts to find more time and fuel efficient reroutes around current and predicted constraints. The dynamic constraint avoidance route system continuously analyzes all flight routes and provides reroute advisories that are dynamically updated in real time. The dynamic constraint avoidance route system includes a graphical user interface that allows users to visualize, evaluate, modify if necessary, and implement proposed reroutes.
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 is on the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
A Framework to Design and Optimize Chemical Flooding Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2006-08-31
The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 is on the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2004-11-01
The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 is on the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
MDANSE: An Interactive Analysis Environment for Molecular Dynamics Simulations.
Goret, G; Aoun, B; Pellegrini, E
2017-01-23
The MDANSE software (Molecular Dynamics Analysis of Neutron Scattering Experiments) is presented. It is an interactive application for postprocessing molecular dynamics (MD) simulations. Given the widespread use of MD simulations in the material and biomolecular sciences to gain better insight into experimental techniques such as thermal neutron scattering (TNS), the development of MDANSE has focused on providing a user-friendly, interactive, graphical user interface for analyzing many trajectories in the same session and running several analyses simultaneously, independently of the interface. This first version of MDANSE already offers a broad range of analyses, and the application has been designed to facilitate the introduction of new analyses into the framework. All this makes MDANSE a valuable tool for extracting useful information from trajectories produced by a wide range of MD codes.
siGnum: graphical user interface for EMG signal analysis.
Kaur, Manvinder; Mathur, Shilpi; Bhatia, Dinesh; Verma, Suresh
2015-01-01
Electromyography (EMG) signals, which represent the electrical activity of muscles, can be used for various clinical and biomedical applications. These are complicated and highly varying signals that depend on the anatomical location and physiological properties of the muscles. EMG signals acquired from the muscles require advanced methods for detection, decomposition, and processing. This paper proposes siGnum, a novel graphical user interface (GUI) developed in MATLAB that applies efficient and effective techniques to the processing of raw EMG signals and decomposes them in a simpler manner. It can be used independently of the MATLAB software by employing a deploy tool. This should enable researchers to gain a good understanding of EMG signals and analysis procedures that can be utilized in more powerful, flexible, and efficient applications in the near future.
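The paper does not list the processing steps the GUI applies, but raw EMG handling usually starts with band-pass and power-line notch filtering before any decomposition. The sketch below shows such a front end with SciPy (Python is used here for consistency with the other examples, even though siGnum itself is MATLAB-based); the cutoff frequencies and test signal are assumptions.

```python
# Typical EMG front-end filtering sketch (illustrative; not siGnum's actual
# processing).  20-450 Hz band-pass plus a 50 Hz power-line notch.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 2000                                        # sampling rate, Hz (assumed)
t = np.arange(0, 1, 1 / fs)
raw = (np.random.default_rng(5).normal(size=t.size)   # broadband EMG-like noise
       + 0.5 * np.sin(2 * np.pi * 50 * t))             # power-line interference

b_bp, a_bp = butter(4, [20, 450], btype="bandpass", fs=fs)
b_n, a_n = iirnotch(w0=50, Q=30, fs=fs)

filtered = filtfilt(b_n, a_n, filtfilt(b_bp, a_bp, raw))
print("raw std: %.2f  filtered std: %.2f" % (raw.std(), filtered.std()))
```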
A design of hardware haptic interface for gastrointestinal endoscopy simulation.
Gu, Yunjin; Lee, Doo Yong
2011-01-01
Gastrointestinal endoscopy simulations have been developed to train endoscopic procedures, which require hundreds of practice sessions to become competent in the skills. Even though realistic haptic feedback is important for providing a realistic sensation to the user, most previous simulations, including commercialized ones, have mainly focused on providing realistic visual feedback. In this paper, we propose a novel design of a portable haptic interface, which provides 2-DOF force feedback, for gastrointestinal endoscopy simulation. The haptic interface consists of translational and rotational force feedback mechanisms, which are completely decoupled, and a gripping mechanism for controlling the connection between the endoscope and the force feedback mechanism.
NASA Astrophysics Data System (ADS)
Setscheny, Stephan
The interaction between human beings and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and the keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and sensors for the detection of human interactions, developers gain new possibilities for realising innovative interfaces. With this development, the relevance of computers in the conventional sense, and of graphical user interfaces, is decreasing. A strong impact of this technical evolution can be seen especially in the area of ubiquitous computing and interaction through tangible user interfaces. Apart from this, tangible and directly experienceable interaction offers users an interactive and intuitive method for controlling technical objects. The implementation of microcontrollers for control functions, together with sensors, enables the realisation of these experienceable interfaces. Besides the theory of tangible user interfaces, the consideration of sensors and the Arduino platform forms a main aspect of this work.
User interface design principles for the SSM/PMAD automated power system
NASA Technical Reports Server (NTRS)
Jakstas, Laura M.; Myers, Chris J.
1991-01-01
Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.
Brumberg, Jonathan S; Lorenz, Sean D; Galbraith, Byron V; Guenther, Frank H
2012-01-01
In this paper we present a framework for reducing the development time needed for creating applications for use in non-invasive brain-computer interfaces (BCI). Our framework is primarily focused on facilitating rapid software "app" development akin to current efforts in consumer portable computing (e.g. smart phones and tablets). This is accomplished by handling intermodule communication without direct user or developer implementation, instead relying on a core subsystem for communication of standard, internal data formats. We also provide a library of hardware interfaces for common mobile EEG platforms for immediate use in BCI applications. A use-case example is described in which a user with amyotrophic lateral sclerosis participated in an electroencephalography-based BCI protocol developed using the proposed framework. We show that our software environment is capable of running in real-time with updates occurring 50-60 times per second with limited computational overhead (5 ms system lag) while providing accurate data acquisition and signal analysis.
Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface
De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara
2011-01-01
A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative method of measuring human-robot interaction. To obtain this measure, an array of newly developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for assessing the safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574
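With a distributed array of pressure sensors at the cuff, one useful derived quantity is the center of pressure across the contact area, which summarizes where the interface loads the limb. The sketch below computes it for a small synthetic sensor grid; the grid size and readings are made up, and the computation is not claimed to be part of the published apparatus.

```python
# Center-of-pressure sketch for a small grid of interface pressure sensors
# (illustrative; grid layout and readings are made up).
import numpy as np

# 4 x 6 grid of pressure readings in kPa, one per silicone sensor (assumed).
pressure = np.array([
    [1.0, 1.2, 1.5, 1.4, 1.1, 0.9],
    [1.3, 2.0, 2.8, 2.5, 1.8, 1.2],
    [1.1, 1.9, 2.6, 2.4, 1.7, 1.0],
    [0.8, 1.0, 1.3, 1.2, 1.0, 0.7],
])
rows, cols = np.indices(pressure.shape)

total = pressure.sum()
cop_row = (rows * pressure).sum() / total      # pressure-weighted row index
cop_col = (cols * pressure).sum() / total      # pressure-weighted column index

print(f"mean pressure: {pressure.mean():.2f} kPa")
print(f"center of pressure at grid position ({cop_row:.2f}, {cop_col:.2f})")
```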
Developing a Graphical User Interface for the ALSS Crop Planning Tool
NASA Technical Reports Server (NTRS)
Koehlert, Erik
1997-01-01
The goal of my project was to create a graphical user interface for a prototype crop scheduler. The crop scheduler was developed by Dr. Jorge Leon and Laura Whitaker for the ALSS (Advanced Life Support System) program. The addition of a system-independent graphical user interface to the crop planning tool will make the application more accessible to a wider range of users and enhance its value as an analysis, design, and planning tool. My presentation will demonstrate the form and functionality of this interface. This graphical user interface allows users to edit system parameters stored in the file system. Data on the interaction of the crew, crops, and waste processing system with the available system resources is organized and labeled. Program output, which is stored in the file system, is also presented to the user in performance-time plots and organized charts. The menu system is designed to guide the user through analysis and decision making tasks, providing some help if necessary. The Java programming language was used to develop this interface in hopes of providing portability and remote operation.
Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun
2015-01-01
Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
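The system recognizes gestures with dynamic time warping (DTW) over tracked hand trajectories. A compact, dependency-free DTW distance between two 1-D sequences is sketched below to illustrate the matching step; the sequences, and the reduction of a gesture to a single 1-D signal, are simplifications rather than the published algorithm.

```python
# Minimal dynamic time warping (DTW) distance between two 1-D sequences
# (illustrative of the matching step; real gestures are multi-dimensional).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

template = np.sin(np.linspace(0, np.pi, 30))          # stored gesture template
query = np.sin(np.linspace(0, np.pi, 45))             # same shape, different speed
other = np.linspace(0, 1, 40)                         # a different gesture

print("distance to matching gesture:", round(dtw_distance(template, query), 3))
print("distance to other gesture:   ", round(dtw_distance(template, other), 3))
```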
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
Overview of Graphical User Interfaces.
ERIC Educational Resources Information Center
Hulser, Richard P.
1993-01-01
Discussion of graphical user interfaces for online public access catalogs (OPACs) covers the history of OPACs; OPAC front-end design, including examples from Indiana University and the University of Illinois; and planning and implementation of a user interface. (10 references) (EA)
NASA Astrophysics Data System (ADS)
Beach, A. L., III; Early, A. B.; Chen, G.; Parker, L.
2014-12-01
NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, which are characterized by a wide range of trace gases and aerosol properties. The airborne observational data have often been used in assessment and validation of models and satellite instruments. The ASDC Toolset for Airborne Data (TAD) is being designed to meet the user community needs for manipulating aircraft data for scientific research on climate change and air quality relevant issues. Given the sheer volume of data variables across field campaigns and instruments reporting data on different time scales, this data is often difficult and time-intensive for researchers to analyze. The TAD web application is designed to provide an intuitive user interface (UI) to facilitate quick and efficient discovery from a vast number of airborne variables and data. Users are given the option to search based on high-level parameter groups, individual common names, mission and platform, as well as date ranges. Experienced users can immediately filter by keyword using the global search option. Once the user has chosen their required variables, they are given the option to either request PI data files based on their search criteria or create merged data, i.e. geo-located data from one or more measurement PIs. The purpose of the merged data feature is to allow users to compare data from one flight, as not all data from each flight is taken on the same time scale. Time bases can be continuous or based on the time base from one of the measurement time scales and intervals. After an order is submitted and processed, an ASDC email is sent to the user with a link for data download. The TAD user interface design, application architecture, and proposed future enhancements will be presented.
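The merged-data feature is described only at a high level: measurements reported on different time scales are placed onto a single time base so they can be compared flight by flight. A minimal Python sketch of that general idea (simple interpolation onto a chosen time base) follows; the variable names and sampling rates are hypothetical, and this is not the TAD implementation.

```python
# Sketch: placing two airborne measurements, reported on different time bases,
# onto one common time base by interpolation. Variable names are hypothetical.
import numpy as np

def merge_to_time_base(time_base, series):
    """series: dict of name -> (times, values); returns dict of name -> values
    interpolated onto time_base (NaN outside each instrument's coverage)."""
    merged = {}
    for name, (t, v) in series.items():
        merged[name] = np.interp(time_base, t, v, left=np.nan, right=np.nan)
    return merged

# Example: 1 Hz "O3" and 0.1 Hz "CO" merged onto a continuous 10 s time base.
time_base = np.arange(0.0, 60.0, 10.0)
series = {
    "O3": (np.arange(0.0, 60.0, 1.0),  np.random.rand(60)),
    "CO": (np.arange(0.0, 60.0, 10.0), np.random.rand(6)),
}
merged = merge_to_time_base(time_base, series)
```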
Developing A Web-based User Interface for Semantic Information Retrieval
NASA Technical Reports Server (NTRS)
Berrios, Daniel C.; Keller, Richard M.
2003-01-01
While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.
CLIPS application user interface for the PC
NASA Technical Reports Server (NTRS)
Jenkins, Jim; Holbrook, Rebecca; Shewhart, Mark; Crouse, Joey; Yarost, Stuart
1991-01-01
The majority of applications that utilize expert system development programs for their knowledge representation and inferencing capability require some form of interface with the end user. This interface is more than likely an interaction through the computer screen. When building an application, the user interface can prove to be the most difficult and time-consuming aspect to program. Commercial products currently exist which address this issue. To keep pace, the C Language Integrated Production System (CLIPS) will need a solution for its lack of an easy-to-use Application User Interface (AUI). This paper presents a survey of the DoD CLIPS user community and provides the backbone of a possible solution.
NASA Technical Reports Server (NTRS)
Szczur, Martha R.
1990-01-01
The Transportable Applications Environment Plus (TAE Plus), developed at NASA's Goddard Space Flight Center, is a portable What You See Is What You Get (WYSIWYG) user interface development and management system. Its primary objective is to provide an integrated software environment that allows interactive prototyping and development of user interfaces, as well as management of the user interface within the operational domain. Although TAE Plus is applicable to many types of applications, its focus is supporting user interfaces for space applications. This paper discusses what TAE Plus provides and how the implementation has utilized state-of-the-art technologies within graphic workstations, windowing systems and object-oriented programming languages.
An interactive medical image segmentation framework using iterative refinement.
Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay
2017-04-01
Segmentation is often performed on medical images for identifying diseases in clinical evaluation. Hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory segmentation results for medical images as they contain irregularities. They need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done using the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
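MIST's two stages are only summarized above. The sketch below illustrates the general shape of such a pipeline in Python with OpenCV: a morphological operation produces a rough marker, which then seeds GrabCut. The thresholding choice, kernel size, and iteration count are placeholders, not the values used by MIST.

```python
# Minimal sketch of the two-stage idea: a morphological marker followed by
# GrabCut refinement (OpenCV). Parameters are illustrative placeholders.
import cv2
import numpy as np

def segment(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Stage 1: rough binary marker of the region of interest.
    _, marker = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, kernel)

    # Stage 2: GrabCut initialized from the marker mask.
    mask = np.where(marker > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)
```

In the tool described above, the result of this second stage would then be refined interactively through the GUI.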
Tags Extraction from Spatial Documents in Search Engines
NASA Astrophysics Data System (ADS)
Borhaninejad, S.; Hakimpour, F.; Hamzei, E.
2015-12-01
Nowadays, selective access to information on the Web is provided by search engines, but in cases where the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between the system and the user. We have implemented this system as a pilot on an application server as a simulation of the Web. Our system, as a spatial search engine, provides search capability across GML documents, and thus an important step toward improving the efficiency of search engines has been taken.
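As a rough illustration of the crawler's parsing step, the following Python sketch pulls a couple of tags out of a GML fragment with the standard-library XML parser. The gml:name and gml:pos tags are only examples, since real GML documents depend on their application schema, and this is not the authors' crawler.

```python
# Sketch: extracting place names and coordinates from a GML fragment.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

def extract_tags(gml_text):
    root = ET.fromstring(gml_text)
    records = []
    for name_el in root.iter(f"{{{GML_NS}}}name"):
        records.append(("name", name_el.text))
    for pos_el in root.iter(f"{{{GML_NS}}}pos"):
        records.append(("pos", pos_el.text))
    return records

sample = (
    '<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">'
    '<gml:name>City Park</gml:name><gml:pos>35.68 51.39</gml:pos>'
    '</gml:FeatureCollection>'
)
print(extract_tags(sample))   # these records would be indexed by the database component
```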
Human/Computer Interfacing in Educational Environments.
ERIC Educational Resources Information Center
Sarti, Luigi
1992-01-01
This discussion of educational applications of user interfaces covers the benefits of adopting database techniques in organizing multimedia materials; the evolution of user interface technology, including teletype interfaces, analogic overlay graphics, window interfaces, and adaptive systems; application design problems, including the…
Developing the Multimedia User Interface Component (MUSIC) for the Icarus Presentation System (IPS)
1993-12-01
AD-A276 341. In-house report, December 1993 (covering June-August 1993). Developing the Multimedia User Interface Component (MUSIC) for the Icarus Presentation System (IPS). This report documents the initial research, design and implementation of a prototype of the Multimedia User Interface Component (MUSIC).
Development and evaluation of nursing user interface screens using multiple methods.
Hyun, Sookyung; Johnson, Stephen B; Stetson, Peter D; Bakken, Suzanne
2009-12-01
Building upon the foundation of the Structured Narrative Electronic Health Record (EHR) model, we applied theory-based (combined Technology Acceptance Model and Task-Technology Fit Model) and user-centered methods to explore nurses' perceptions of functional requirements for an electronic nursing documentation system, design user interface screens reflective of the nurses' perspectives, and assess nurses' perceptions of the usability of the prototype user interface screens. The methods resulted in user interface screens that were perceived to be easy to use, potentially useful, and well-matched to nursing documentation tasks associated with Nursing Admission Assessment, Blood Administration, and Nursing Discharge Summary. The methods applied in this research may serve as a guide for others wishing to implement user-centered processes to develop or extend EHR systems. In addition, some of the insights obtained in this study may be informative to the development of safe and efficient user interface screens for nursing document templates in EHRs.
Cooperative processing user interfaces for AdaNET
NASA Technical Reports Server (NTRS)
Gutzmann, Kurt M.
1991-01-01
A cooperative processing user interface (CUI) system shares the task of graphical display generation and presentation between the user's computer and a remote host. The communications link between the two computers is typically a modem or Ethernet. The two main purposes of a CUI are reduction of the amount of data transmitted between user and host machines, and provision of a graphical user interface system to make the system easier to use.
Improvement of design of a surgical interface using an eye tracking device
2014-01-01
Background: Surgical interfaces are used for helping surgeons in interpretation and quantification of the patient information, and for the presentation of an integrated workflow where all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to design user interfaces with safety, accuracy, satisfaction and comfort. One human factors research method, called the user-centered design approach, is used to develop a surgical interface for kidney tumor cryoablation. An eye tracking device is used to obtain the best configuration of the developed surgical interface. Methods: The surgical interface for kidney tumor cryoablation has been developed considering the four phases of the user-centered design approach, which are analysis, design, implementation and deployment. Possible configurations of the surgical interface, which comprise various combinations of menu-based command controls, visual display of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments of a simulated cryoablation of a tumor task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and numbers of fixations at informative regions of the surgical interface have been analyzed, and these data are used to modify the surgical interface. Results: Eye movement data have shown that participants concentrated their attention on informative regions more when the number of displayed Computer Tomography (CT) images was reduced. Additionally, the time required by the participants to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to what is observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results have shown that the overall mental workload of surgeons related to the surgical interface has been low, as aimed, and the overall situational awareness scores of surgeons have been considerably high. Conclusions: This preliminary study highlights the improvement of a developed surgical interface using eye tracking technology to obtain the best SI configuration. The results presented here reveal that visual surgical interface design prepared according to eye movement characteristics may lead to improved usability. PMID:25080176
Improvement of design of a surgical interface using an eye tracking device.
Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz
2014-05-07
Surgical interfaces are used for helping surgeons in interpretation and quantification of the patient information, and for the presentation of an integrated workflow where all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to design user interfaces with safety, accuracy, satisfaction and comfort. One human factors research method, called the user-centered design approach, is used to develop a surgical interface for kidney tumor cryoablation. An eye tracking device is used to obtain the best configuration of the developed surgical interface. The surgical interface for kidney tumor cryoablation has been developed considering the four phases of the user-centered design approach, which are analysis, design, implementation and deployment. Possible configurations of the surgical interface, which comprise various combinations of menu-based command controls, visual display of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments of a simulated cryoablation of a tumor task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and numbers of fixations at informative regions of the surgical interface have been analyzed, and these data are used to modify the surgical interface. Eye movement data have shown that participants concentrated their attention on informative regions more when the number of displayed Computer Tomography (CT) images was reduced. Additionally, the time required by the participants to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to what is observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results have shown that the overall mental workload of surgeons related to the surgical interface has been low, as aimed, and the overall situational awareness scores of surgeons have been considerably high. This preliminary study highlights the improvement of a developed surgical interface using eye tracking technology to obtain the best SI configuration. The results presented here reveal that visual surgical interface design prepared according to eye movement characteristics may lead to improved usability.
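Both versions of this abstract describe analyzing fixation durations and fixation counts at informative regions of the interface. A minimal sketch of that kind of per-region summary is shown below; the area-of-interest rectangles and fixation samples are hypothetical, not taken from the study.

```python
# Sketch: summarizing fixation counts and durations per area of interest (AOI).
from collections import defaultdict

AOIS = {                      # name -> (x0, y0, x1, y1) in screen pixels (hypothetical)
    "CT_panel": (0,   0,   800,  600),
    "3D_model": (800, 0,   1280, 600),
    "alerts":   (0,   600, 1280, 720),
}

def summarize(fixations):
    """fixations: list of (x, y, duration_ms). Returns AOI -> (count, total_ms)."""
    stats = defaultdict(lambda: [0, 0.0])
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                stats[name][0] += 1
                stats[name][1] += dur
                break
    return {k: tuple(v) for k, v in stats.items()}

print(summarize([(400, 300, 220), (900, 200, 180), (410, 310, 250)]))
```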
Remapping residual coordination for controlling assistive devices and recovering motor functions.
Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias B; Mussa-Ivaldi, Ferdinando A; Casadio, Maura
2015-12-01
The concept of human motor redundancy has attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in human-machine interface (HMI) development. The HMI can act at different levels, decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs bypass completely the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user's residual upper body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between the left and right sides of the body, with an increase in mobility and strength of all the degrees of freedom in the participants involved in the control of the interface. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. Copyright © 2015 Elsevier Ltd. All rights reserved.
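The core of such an interface, mapping high-dimensional residual body motion to two cursor coordinates via PCA, can be sketched in a few lines. The example below uses scikit-learn on made-up calibration data; the channel count, gain, and screen mapping are assumptions, and the map modifications discussed in the article are not reproduced.

```python
# Sketch: mapping multi-channel body signals to a 2D cursor with PCA,
# in the spirit of the body-machine interface described above.
import numpy as np
from sklearn.decomposition import PCA

# Calibration: free upper-body motion recorded from, e.g., 8 sensor channels.
calibration = np.random.randn(2000, 8)

pca = PCA(n_components=2)
pca.fit(calibration)

def body_to_cursor(sample, gain=100.0, screen_center=(640, 360)):
    """Project one 8-channel sample onto the 2D cursor space."""
    u, v = pca.transform(sample.reshape(1, -1))[0]
    return screen_center[0] + gain * u, screen_center[1] + gain * v

print(body_to_cursor(calibration[0]))
```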
Eye Tracking Based Control System for Natural Human-Computer Interaction
Lin, Shu-Fan
2017-01-01
Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching for an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design. PMID:29403528
Eye Tracking Based Control System for Natural Human-Computer Interaction.
Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan
2017-01-01
Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching for an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.
New Exposure Time Calculator for NICMOS (imaging): Features, Testing and Recommendations
NASA Astrophysics Data System (ADS)
Arribas, S.; McLean, D.; Busko, I.; Sosey, M.
2004-02-01
A new NICMOS ETC for imaging mode has been developed as part of the Astronomer’s Proposal Toolkit (APT) project. This new tool fully updates the NICMOS performance for Cycles 11+, expands the functionality of the previous ETC, providing the user with more options, and homogenizes the non-instrument-specific parameters (i.e. sky background, extinction laws) with other HST-ETCs. This report summarizes its main characteristics and gives some recommendations to potential users. Details about the tool itself can be found in the documentation linked to the ETC user interface, which can be accessed from the NICMOS web site at STScI.
Interruption as a test of the user-computer interface
NASA Technical Reports Server (NTRS)
Kreifeldt, J. G.; Mccarthy, M. E.
1981-01-01
In order to study the effects different logic systems might have on interrupted operation, an algebraic calculator and a reverse Polish notation calculator were compared when trained users were interrupted during problem entry. The RPN calculator showed markedly superior resistance to interruption effects compared to the AN calculator, although no significant differences were found when the users were not interrupted. Causes of and possible remedies for interruption effects are speculated upon. It is proposed that, because interruption is such a common occurrence, it be incorporated into comparative evaluation tests of different logic systems and control/display systems, and that interruption resistance be adopted as a specific design criterion.
Multi-modal virtual environment research at Armstrong Laboratory
NASA Technical Reports Server (NTRS)
Eggleston, Robert G.
1995-01-01
One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.
Interactive Design and the Mythical "Intuitive User Interface."
ERIC Educational Resources Information Center
Bielenberg, Daniel R.
1993-01-01
Discusses the design of graphical user interfaces. Highlights include conceptual models, including user needs, content, and what multimedia can do; and tools for building the users' mental models, including metaphor, natural mappings, prompts, feedback, and user testing. (LRW)
User Interface Design for Dynamic Geometry Software
ERIC Educational Resources Information Center
Kortenkamp, Ulrich; Dohrmann, Christian
2010-01-01
In this article we describe long-standing user interface issues with Dynamic Geometry Software and common approaches to address them. We describe first prototypes of multi-touch-capable DGS. We also give some hints on the educational benefits of proper user interface design.
The development of an intelligent user interface for NASA's scientific databases
NASA Technical Reports Server (NTRS)
Campbell, William J.; Roelofs, Larry H.
1986-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI effort is to develop a friendly and intelligent user interface service that is based on expert systems and natural language processing technologies. This paper presents the design concepts, development approach and evaluation of performance of a prototype Intelligent User Interface Subsystem (IUIS) supporting an operational database.
NASA Astrophysics Data System (ADS)
Siarto, J.
2014-12-01
As more Earth science software tools and services move to the web, the design and usability of those tools become ever more important. A good user interface is becoming expected, and users are becoming increasingly intolerant of websites and web applications that work against them. The Earthdata UI Pattern Library attempts to give scientists and developers the design tools they need to make usable, compelling user interfaces without the associated overhead of using a full design team. The patterns are tested, functional user interface elements targeted specifically at the Earth science community and will include web layouts, buttons, tables, typography, iconography, mapping and visualization/graphing widgets. These UI elements have emerged as the result of extensive user testing, research and software development within the NASA Earthdata team over the past year.
MOO in Your Face: Researching, Designing, and Programming a User-Friendly Interface.
ERIC Educational Resources Information Center
Haas, Mark; Gardner, Clinton
1999-01-01
Suggests the learning curve of a multi-user, object-oriented domain (MOO) blockades effective use. Discusses use of an IBM/PC-compatible interface that allows developers to modify the interface to provide a sense of presence for the user. Concludes that work in programming a variety of interfaces has led to a more intuitive environment for…
An Empathic Avatar in a Computer-Aided Learning Program to Encourage and Persuade Learners
ERIC Educational Resources Information Center
Chen, Gwo-Dong; Lee, Jih-Hsien; Wang, Chin-Yeh; Chao, Po-Yao; Li, Liang-Yi; Lee, Tzung-Yi
2012-01-01
Animated pedagogical agents with characteristics such as facial expressions, gestures, and human emotions, under an interactive user interface are attractive to students and have high potential to promote students' learning. This study proposes a convenient method to add an embodied empathic avatar into a computer-aided learning program; learners…
Proposing a Mathematical Software Tool in Physics Secondary Education
ERIC Educational Resources Information Center
Baltzis, Konstantinos B.
2009-01-01
MathCad® is a very popular software tool for mathematical and statistical analysis in science and engineering. Its low cost, ease of use, extensive function library, and worksheet-like user interface distinguish it among other commercial packages. Its features are also well suited to educational process. The use of natural mathematical notation…
Towards a framework for geospatial tangible user interfaces in collaborative urban planning
NASA Astrophysics Data System (ADS)
Maquil, Valérie; Leopold, Ulrich; De Sousa, Luís Moreira; Schwartz, Lou; Tobias, Eric
2018-04-01
The increasing complexity of urban planning projects today requires new approaches to better integrate stakeholders with different professional backgrounds throughout a city. Traditional tools used in urban planning are designed for experts and offer little opportunity for participation and collaborative design. This paper introduces the concept of geospatial tangible user interfaces (GTUI) and reports on the design and implementation as well as the usability of such a GTUI to support stakeholder participation in collaborative urban planning. The proposed system uses physical objects to interact with large digital maps and geospatial data projected onto a tabletop. It is implemented using a PostGIS database, a web map server providing OGC web services, the computer vision framework reacTIVision, a Java-based TUIO client, and GeoTools. We describe how a GTUI has been instantiated and evaluated within the scope of two case studies related to real-world collaborative urban planning scenarios. Our results confirm the feasibility of our proposed GTUI solutions to (a) instantiate different urban planning scenarios, (b) support collaboration, and (c) ensure acceptable usability.
Home telemonitoring of vital signs--technical challenges and future directions.
Celler, Branko G; Sparks, Ross S
2015-01-01
The telemonitoring of vital signs from the home is an essential element of telehealth services for the management of patients with chronic conditions, such as congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD), diabetes, or poorly controlled hypertension. Telehealth is now being deployed widely in both rural and urban settings, and in this paper, we discuss the contribution made by biomedical instrumentation, user interfaces, and automated risk stratification algorithms in developing a clinical diagnostic quality longitudinal health record at home. We identify technical challenges in the acquisition of high-quality biometric signals from unsupervised patients at home, identify new technical solutions and user interfaces, and propose new measurement modalities and signal processing techniques for increasing the quality and value of vital signs monitoring at home. We also discuss use of vital signs data for the automated risk stratification of patients, so that clinical resources can be targeted to those most at risk of unscheduled admission to hospital. New research is also proposed to integrate primary care, hospital, personal genomic, and telehealth electronic health records, and apply predictive analytics and data mining for enhancing clinical decision support.
User centered integration of Internet of Things devices
NASA Astrophysics Data System (ADS)
Manione, Roberto
2017-06-01
This paper discusses an IoT framework which allows rapid and easy setup and customization of end-to-end solutions for field data collection and presentation; it is effective in the development of both informative and transactional applications for a wide range of application fields, such as home, industry and environment. On the "far end" of the chain are the IoT devices gathering the signals; they are developed using a full model-based approach, where programming is not required: the TaskScript technology is used for this purpose, which supports a choice of physical boards and boxes equipped with a range of input and output interfaces, and with a TCP/IP interface. The development of the needed specific IoT devices takes advantage of the available "standard" hardware; the software development of the algorithms for sampling, conditioning and uploading signals to the cloud is supported by a graphical-only IDE. On the "near end" of the chain is the presentation interface, through which users can browse the information provided by their IoT devices; it is implemented in a conversational way, using the bot paradigm: bots are conversational automatons with which users can chat. They are accessed via mainstream messenger programs, such as Telegram(C), Skype(C) or others, available on smartphones, tablets or desktops; unlike apps, bots do not need installation on the user device. A message broker has been implemented to mediate between the far end and the near end of the chain, providing the needed services; its behavior is driven by a set of rules provided on a per-device basis at configuration level; the broker is able to store messages received from the devices, and to process and forward them to the specified recipient(s) according to the provided rules; finally, it is able to send transactional commands from users back to the requested device, implementing not only field observation but also field control. IoT solutions implemented with the proposed approach are user friendly: users can literally "chat with their devices", asking for information, providing commands, and receiving alert notifications, all with their favorite (mobile) terminal. To demonstrate the effectiveness of the proposed scenario, several solutions have been set up for industrial applications; such "mobile dashboards" are presently used by managers and technicians to keep track of their machines and plants.
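The broker's behavior, storing each device message and then forwarding it or raising an alert according to per-device rules, can be sketched as follows. The rule fields, the notify() stub, and the in-memory store are assumptions made for illustration; the actual system delivers notifications through messenger bots.

```python
# Sketch of rule-driven message brokering: store each device message, then
# forward it (or raise an alert) according to per-device rules.
RULES = {
    "pump-01": {"forward_to": ["ops-team"], "alert_if": lambda m: m["level"] > 0.9},
}

STORE = []          # stand-in for persistent message storage

def notify(recipient, text):
    print(f"-> {recipient}: {text}")     # would go out via a messenger bot

def on_device_message(device_id, message):
    STORE.append((device_id, message))                    # 1. store
    rule = RULES.get(device_id)
    if not rule:
        return
    for recipient in rule["forward_to"]:                  # 2. forward
        notify(recipient, f"{device_id}: {message}")
    if rule["alert_if"](message):                         # 3. alert
        notify("on-call", f"ALERT from {device_id}: {message}")

on_device_message("pump-01", {"level": 0.95})
```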
Command and control interfaces for advanced neuroprosthetic applications.
Scott, T R; Haugland, M
2001-10-01
Command and control interfaces permit the intention and situation of the user to influence the operation of the neural prosthesis. The wishes of the user are communicated via command interfaces to the neural prosthesis and the situation of the user by feedback control interfaces. Both these interfaces have been reviewed separately and are discussed in light of the current state of the art and projections for the future. It is apparent that as system functional complexity increases, the need for simpler command interfaces will increase. Such systems will demand more information to function effectively in order not to unreasonably increase user attention overhead. This will increase the need for bioelectric and biomechanical signals in a comprehensible form via elegant feedback control interfaces. Implementing such systems will also increase the computational demand on such neural prostheses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, B.P.
This report presents a historical perspective on the difficulties associated with user interface design and a review of interface design techniques. Included in the report is an application of rapid interface prototyping to the development of CAMP's user interface. 24 refs., 2 tabs.
A Question of Interface Design: How Do Online Service GUIs Measure Up?
ERIC Educational Resources Information Center
Head, Alison J.
1997-01-01
Describes recent improvements in graphical user interfaces (GUIs) offered by online services. Highlights include design considerations, including computer engineering capabilities and users' abilities; fundamental GUI design principles; user empowerment; visual communication and interaction; and an evaluation of online search interfaces. (LRW)
The GUI OPAC: Approach with Caution.
ERIC Educational Resources Information Center
Hildreth, Charles R.
1995-01-01
Discusses the graphical user interface (GUI) online public access catalog (OPAC), a user interface that uses images to represent options. Topics include user interface design for information retrieval; designing effective bibliographic displays, including subject headings; two design principles; and what GUIs can bring to OPACs. (LRW)
The Graphical User Interface: Crisis, Danger, and Opportunity.
ERIC Educational Resources Information Center
Boyd, L. H.; And Others
1990-01-01
This article describes differences between the graphical user interface and traditional character-based interface systems, identifies potential problems posed by graphic computing environments for blind computer users, and describes some programs and strategies that are being developed to provide access to those environments. (Author/JDD)
Eye-gaze and intent: Application in 3D interface control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, J.C.; Goldberg, J.H.
1993-06-01
Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.
A brain-computer interface controlled mail client.
Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Wang, Cong
2013-01-01
In this paper, we propose a brain-computer interface (BCI) based mail client. This system is controlled by hybrid features extracted from scalp-recorded electroencephalographic (EEG) signals. We emulate the computer mouse by the motor imagery-based mu rhythm and the P300 potential. Furthermore, an adaptive P300 speller is included to provide a text input function. With this BCI mail client, users can receive, read, and write mail, as well as attach files when writing. The system has been tested on 3 subjects. Experimental results show that mail communication with this system is feasible.
Ramos, S Raquel
2017-11-01
Health information exchange is the electronic accessibility and transferability of patient medical records across various healthcare settings and providers. In some states, patients have to formally give consent to allow their medical records to be electronically shared. The purpose of this study was to apply a novel user-centered, multistep, multiframework approach to design and test an electronic consent user interface, so patients with HIV can make more informed decisions about electronically sharing their health information. This study consisted of two steps. Step 1 was a cross-sectional, descriptive, qualitative study that used user-centric design interviews to create the user interface. This informed Step 2. Step 2 consisted of a one-group posttest to examine perceptions of usefulness, ease of use, preference, and comprehension of a health information exchange electronic consent user interface. More than half of the study population had college experience, but challenges remained with overall comprehension regarding consent. The user interface was not independently successful, suggesting that in addition to an electronic consent user interface, human interaction may also be necessary to address the complexities associated with consenting to electronically share health information. Comprehension is a key factor in the ability to make informed decisions.
Admission and Preventive Load Control for Delivery of Multicast and Broadcast Services via S-UMTS
NASA Astrophysics Data System (ADS)
Angelou, E.; Koutsokeras, N.; Andrikopoulos, I.; Mertzanis, I.; Karaliopoulos, M.; Henrio, P.
2003-07-01
An Admission Control strategy is proposed for unidirectional satellite systems delivering multicast and broadcast services to mobile users. In such systems, both the radio interface and the targeted services impose particular requirements on the RRM task. We briefly discuss the RRM requirements that stem from the services point of view and from the features of the SATIN access scheme that differentiate it from the conventional T-UMTS radio interface. The main functional entities of RRM and the alternative modes of operation are outlined and the proposed Admission Control algorithm is described in detail. The results from the simulation study that demonstrate its performance for a number of different scenarios are finally presented and conclusions derived.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provides a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. The area under the curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, the bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provides a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. The area under the curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
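Of the three BSA techniques mentioned, the frequency ratio is the simplest to illustrate: for each class of a conditioning factor, the share of hazard pixels falling in that class is divided by the share of all pixels in that class. The Python sketch below computes it on hypothetical rasters; it is not the BSM tool itself.

```python
# Sketch of the frequency-ratio calculation:
# FR(class) = (hazard pixels in class / all hazard pixels) / (class pixels / all pixels)
import numpy as np

def frequency_ratio(factor_classes, hazard_mask):
    """factor_classes: int raster of class labels; hazard_mask: bool raster."""
    fr = {}
    total = factor_classes.size
    total_hazard = hazard_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        hazard_in_class = (in_class & hazard_mask).sum()
        fr[int(c)] = (hazard_in_class / total_hazard) / (in_class.sum() / total)
    return fr

# Hypothetical 100x100 rasters: three factor classes, ~5% hazard pixels.
classes = np.random.randint(0, 3, size=(100, 100))
hazard = np.random.rand(100, 100) < 0.05
print(frequency_ratio(classes, hazard))
```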
A design space of visualization tasks.
Schulz, Hans-Jörg; Nocke, Thomas; Heitzler, Magnus; Schumann, Heidrun
2013-12-01
Knowledge about visualization tasks plays an important role in choosing or building suitable visual representations to pursue them. Yet, tasks are a multi-faceted concept and it is thus not surprising that the many existing task taxonomies and models all describe different aspects of tasks, depending on what these task descriptions aim to capture. This results in a clear need to bring these different aspects together under the common hood of a general design space of visualization tasks, which we propose in this paper. Our design space consists of five design dimensions that characterize the main aspects of tasks and that have so far been distributed across different task descriptions. We exemplify its concrete use by applying our design space in the domain of climate impact research. To this end, we propose interfaces to our design space for different user roles (developers, authors, and end users) that allow users of different levels of expertise to work with it.
Koutelakis, George V.; Anastassopoulos, George K.; Lymberopoulos, Dimitrios K.
2012-01-01
Multiprotocol medical imaging communication through the Internet is more flexible than tight DICOM transfers. This paper introduces a modular multiprotocol teleradiology architecture that integrates DICOM and common Internet services (based on web, FTP, and E-mail) into a unique operational domain. The extended WADO service (a web extension of DICOM) and the other proposed services allow access to all levels of the DICOM information hierarchy, as opposed to solely the Object level. A lightweight client site is considered adequate, because the server site of the architecture provides clients with service interfaces through the web, as well as invulnerable space for temporary storage, called User Domains, so that users can fulfill their applications' tasks. The proposed teleradiology architecture is pilot-implemented using mainly Java-based technologies and is evaluated by engineers in collaboration with doctors. The new architecture ensures flexibility in access, user mobility, and enhanced data security. PMID:22489237
Distributed On-line Monitoring System Based on Modem and Public Phone Net
NASA Astrophysics Data System (ADS)
Chen, Dandan; Zhang, Qiushi; Li, Guiru
In order to solve the monitoring problem of urban sewage disposal, a distributed on-line monitoring system is proposed. By introducing dial-up communication technology based on a modem, the serial communication program can rationally solve the information transmission problem between the master station and slave stations. The realization of the serial communication program is based on the MSComm control of C++ Builder 6.0. The software includes a real-time data operation part and a history data handling part, which use Microsoft SQL Server 2000 for the database and C++ Builder 6.0 for the user interface. The monitoring center displays a user interface with alarm information for over-standard data and real-time curves. Practical application shows that the system has successfully accomplished real-time data acquisition from the data gathering stations and stored the data in the terminal database.
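The original implementation uses the MSComm control in C++ Builder; as a rough, language-neutral illustration of the same dial-poll-store cycle, here is a small sketch with pyserial. The port name, dialing sequence, frame format, and store_reading() stub are assumptions, not the system's actual protocol.

```python
# Sketch of a master-station polling loop over a dial-up modem: dial the slave
# station, read one data frame, hang up, and hand the frame to the database layer.
import serial

def poll_station(phone_number, port="COM3"):
    with serial.Serial(port, baudrate=9600, timeout=10) as modem:
        modem.write(f"ATDT{phone_number}\r".encode())   # dial the slave station
        frame = modem.readline()                        # one measurement frame
        modem.write(b"ATH\r")                           # hang up
    return frame

def store_reading(frame):
    print("insert into history table:", frame)          # stand-in for an SQL insert

store_reading(poll_station("02112345678"))
```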
Haptic interface of web-based training system for interventional radiology procedures
NASA Astrophysics Data System (ADS)
Ma, Xin; Lu, Yiping; Loe, KiaFock; Nowinski, Wieslaw L.
2004-05-01
The existing web-based medical training systems and surgical simulators can provide an affordable and accessible medical training curriculum, but they seldom offer the trainee realistic and affordable haptic feedback. Therefore, they cannot offer the trainee a suitable practice environment. In this paper, a haptic solution for interventional radiology (IR) procedures is proposed. The system architecture of a web-based training system for IR procedures is briefly presented first. Then, the mechanical structure, the working principle and the application of a haptic device are discussed in detail. The haptic device works as an interface between the training environment and the trainees and is placed at the end-user side. With the system, the user can be trained on interventional radiology procedures on the web - navigating catheters, inflating balloons, deploying coils and placing stents - and get surgical haptic feedback in real time.
A Prototype Graphical User Interface for Co-op: A Group Decision Support System.
1992-03-01
Information-oriented, systematic graphic design is the use of typography, symbols, color, and other static and dynamic elements so that interfaces achieve their potential to communicate. This thesis designs and develops a prototype Graphical User Interface (GUI) for Co-op, a group decision support system, with the aim of reducing user effort and enhancing interaction, and reviews interface design principles for graphical user interfaces.
1993-11-01
One way is to develop a crude but working model of an entire system. The other is to develop a realistic model of the user interface, leaving out most... devices, or to incorporate software for a more user-friendly interface. Automation introduces the possibility of making data entry errors. Multimode... across various human-computer interfaces. Memory: minimize the amount of information that the user must maintain in short-term memory.
BrainIACS: a system for web-based medical image processing
NASA Astrophysics Data System (ADS)
Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.
2009-02-01
We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is completely implemented using freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements such as check boxes, drop down lists, input boxes, radio buttons, tab panels and group boxes are supported. Some elements such as the input box support input validation. Changes to the user interface such as addition and deletion of elements are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed using Java (and finally JavaScript after compilation of the front-end code), it is platform independent, with the only requirements being that a Servlet implementation be available and that the processing algorithms can execute on the server platform.
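The central idea, user interfaces described in XML and parsed into concrete widgets, can be sketched compactly. The Python example below parses a made-up interface description into widget records; the tag and attribute names are assumptions and do not reproduce the BrainIACS schema (the real system is GWT/Java based).

```python
# Sketch of the XML-to-widgets idea: parse a small interface description and
# emit one record per element; a front end would instantiate matching widgets.
import xml.etree.ElementTree as ET

UI_XML = """
<interface algorithm="skullstrip">
  <checkbox name="bias_correct" label="Apply bias correction" default="true"/>
  <input    name="iterations"   label="Iterations" type="int" min="1" max="50"/>
  <dropdown name="atlas"        label="Atlas" options="adult,pediatric"/>
</interface>
"""

def parse_ui(xml_text):
    root = ET.fromstring(xml_text)
    return [{"widget": el.tag, **el.attrib} for el in root]

for widget in parse_ui(UI_XML):
    print(widget)
```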
KARL: A Knowledge-Assisted Retrieval Language. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, relationships between entities, and basic English language syntax and semantics to translate the user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that will assist in improving the human-machine interface will be needed and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development into the emerging field of natural language database query systems.
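To make the translation pipeline concrete, here is a minimal sketch of the dictionary-plus-grammar idea: a toy lexicon and one restricted-English pattern mapped to a formal query. The lexicon entries, the grammar, and the schema names are invented for illustration and are not KARL's actual design.

```python
import re

# Toy lexicon mapping restricted-English words to schema elements (hypothetical).
LEXICON = {
    "missions": ("table", "missions"),
    "launched": ("column", "launch_date"),
    "after": ("operator", ">"),
    "before": ("operator", "<"),
}

def translate(request):
    """Translate a restricted-English request into a formal (SQL-like) query."""
    m = re.match(r"list (\w+) (\w+) (after|before) (\d{4})", request.lower())
    if not m:
        raise ValueError("request not covered by the restricted grammar")
    table_word, col_word, op_word, year = m.groups()
    _, table = LEXICON[table_word]
    _, column = LEXICON[col_word]
    _, op = LEXICON[op_word]
    return f"SELECT * FROM {table} WHERE {column} {op} '{year}-01-01'"

print(translate("List missions launched after 1985"))
```

A real system of this kind would also run syntax and semantic verification against the dictionary before generating the formal query, and could extend its lexicon from user corrections.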
Gestures in an Intelligent User Interface
NASA Astrophysics Data System (ADS)
Fikkert, Wim; van der Vet, Paul; Nijholt, Anton
In this chapter we investigated which hand gestures are intuitive for controlling a large-display multimedia interface from a user's perspective. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully and intuitively control a large-display multimedia interface. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from the literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which users could interact, using both hands and the preferred hand gestures, with 2D and 3D visualizations of biochemical structures. We found that the gestures are influenced to a significant extent by the fast-paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii, and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.
NASA Astrophysics Data System (ADS)
Inoue, Y.; Tsuruoka, K.; Arikawa, M.
2014-04-01
In this paper, we propose a user interface that displays visual animations on geographic maps and timelines for depicting historical stories by representing causal relationships among events over time. We have been developing an experimental software system for the spatial-temporal visualization of historical stories for tablet computers. Our proposed system helps people learn historical stories effectively using visual animations based on hierarchical structures of different-scale timelines and maps.
Embedded Control System for Smart Walking Assistance Device.
Bosnak, Matevz; Skrjanc, Igor
2017-03-01
This paper presents the design and implementation of a unique control system for a smart hoist, a therapeutic device that is used in the rehabilitation of walking. The control system features a unique human-machine interface that allows the human to intuitively control the system simply by moving or rotating their body. The paper contains an overview of the complete system, including the design and implementation of custom sensors, DC servo motor controllers, communication interfaces and an embedded-system-based central control system. The prototype of the complete system was tested by conducting a six-run experiment on 11 subjects, and the results show that the proposed control system interface is indeed intuitive and simple for the user to adopt.
Human-computer interface including haptically controlled interactions
Anderson, Thomas G.
2005-10-11
The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
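A minimal sketch of the rate control described above, where the scrolling rate follows the force applied against a haptic boundary; the gain, deadband, and saturation values are illustrative assumptions, not values taken from the patent.

```python
def scroll_rate(force_against_boundary, gain=40.0, deadband=0.5, max_rate=200.0):
    """Map the force (N) a user applies against a haptic boundary to a
    scrolling rate (pixels/s). Forces below the deadband do not scroll."""
    magnitude = abs(force_against_boundary)
    if magnitude < deadband:
        return 0.0
    rate = min(gain * (magnitude - deadband), max_rate)
    # preserve direction: pushing against the opposite boundary scrolls the other way
    return rate if force_against_boundary >= 0 else -rate

print(scroll_rate(2.0))   # gentle push -> slow scroll
print(scroll_rate(-5.0))  # harder push in the opposite direction
```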
Applying Cognitive Psychology to User Interfaces
NASA Astrophysics Data System (ADS)
Durrani, Sabeen; Durrani, Qaiser S.
This paper explores some key aspects of cognitive psychology that may be mapped onto user interfaces. The major focus in existing user interface guidelines is on consistency, simplicity, feedback, system messages, display issues, navigation, colors, graphics, visibility and error prevention [8-10]. These guidelines are effective in designing user interfaces. However, they do not handle the issues that may arise due to the innate structure of the human brain and human limitations, for example, where to place graphics on the screen so that the user can easily process them, and what kind of background should be used on the screen given the limitations of the human motor system. In this paper we have collected some available guidelines from the area of cognitive psychology [1, 5, 7]. In addition, we have extracted a few guidelines from theories and studies of cognitive psychology [3, 11] which may be mapped to user interfaces.
Bimanual Interaction with Interscopic Multi-Touch Surfaces
NASA Astrophysics Data System (ADS)
Schöning, Johannes; Steinicke, Frank; Krüger, Antonio; Hinrichs, Klaus; Valkov, Dimitar
Multi-touch interaction has received considerable attention in the last few years, in particular for natural two-dimensional (2D) interaction. However, many application areas deal with three-dimensional (3D) data and therefore require intuitive 3D interaction techniques. Virtual reality (VR) systems provide sophisticated 3D user interfaces but lack efficient 2D interaction, and are therefore rarely adopted by ordinary users or even by experts. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next-generation user interface for 2D as well as 3D interaction. In particular, stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations for multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms and interactions that combine both traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, which we refer to as interscopic multi-touch surfaces (iMUTS). We discuss iMUTS-based user interfaces that support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically. In order to underline the potential of the proposed iMUTS setup, we have developed and evaluated two example interaction metaphors for different domains. First, we present intuitive navigation techniques for virtual 3D city models, and then we describe a natural metaphor for deforming volumetric datasets in a medical context.
Ma, Meng; Fallavollita, Pascal; Habert, Séverine; Weidert, Simon; Navab, Nassir
2016-06-01
In the modern day operating room, the surgeon performs surgeries with the support of different medical systems that showcase patient information, physiological data, and medical images. It is generally accepted that numerous interactions must be performed by the surgical team to control the corresponding medical system to retrieve the desired information. Joysticks and physical keys are still present in the operating room due to the disadvantages of mice, and surgeons often communicate instructions to the surgical team when requiring information from a specific medical system. In this paper, a novel user interface is developed that allows the surgeon to personally perform touchless interaction with the various medical systems and switch effortlessly among them, all without modifying the systems' software and hardware. To achieve this, a wearable RGB-D sensor is mounted on the surgeon's head for inside-out tracking of his/her finger relative to any of the medical systems' displays. Android devices with a special application are connected to the computers on which the medical systems are running, simulating a normal USB mouse and keyboard. When the surgeon performs interaction using pointing gestures, the desired cursor position on the targeted medical system display and the gestures are transformed into general events and then sent to the corresponding Android device. Finally, the application running on the Android devices generates the corresponding mouse or keyboard events according to the targeted medical system. To simulate an operating room setting, our unique user interface was tested by seven medical participants who performed several interactions with the visualization of CT, MRI, and fluoroscopy images at varying distances from them. Results from the system usability scale and NASA-TLX workload index indicated a strong acceptance of our proposed user interface.
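The gesture-to-event translation described above can be sketched as follows; the event fields, display identifier, and message format are assumptions for illustration, not the system's actual protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PointerEvent:
    """Generic event produced from a recognized pointing gesture (illustrative format)."""
    display_id: str      # which medical display the surgeon is pointing at
    x_norm: float        # cursor position normalized to [0, 1]
    y_norm: float
    action: str          # "move", "click", "scroll_up", ...

def to_mouse_event(event: PointerEvent, screen_w: int, screen_h: int):
    """Convert a generic pointer event into absolute mouse coordinates
    for the targeted display (what a bridge device would synthesize)."""
    return {
        "type": "mouse",
        "x": int(event.x_norm * screen_w),
        "y": int(event.y_norm * screen_h),
        "action": event.action,
    }

evt = PointerEvent(display_id="fluoro-1", x_norm=0.42, y_norm=0.77, action="click")
payload = json.dumps(asdict(evt))          # what would be sent over the network
print(to_mouse_event(evt, 1920, 1080))     # what the receiving device would inject
```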
User interface using a 3D model for video surveillance
NASA Astrophysics Data System (ADS)
Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru
1998-02-01
These days, fewer people are required in industrial surveillance and monitoring applications such as plant control or building security, and they must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.
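A minimal sketch of the per-frame synchronization idea: camera field metadata is recorded keyed by frame index and read back in step with playback. The data structures and fallback behavior below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class CameraField:
    pan_deg: float
    tilt_deg: float
    zoom: float

class FieldTrack:
    """Record camera field data per video frame and read it back in sync
    with live display or playback (a simplified stand-in for the scheme)."""
    def __init__(self):
        self._by_frame = {}

    def record(self, frame_index: int, field: CameraField):
        self._by_frame[frame_index] = field

    def read(self, frame_index: int) -> CameraField:
        # fall back to the most recent known field if this frame has no entry
        while frame_index not in self._by_frame and frame_index > 0:
            frame_index -= 1
        return self._by_frame.get(frame_index, CameraField(0.0, 0.0, 1.0))

track = FieldTrack()
track.record(0, CameraField(10.0, -5.0, 1.0))
track.record(30, CameraField(25.0, -5.0, 2.0))
print(track.read(42))  # camera field in effect at frame 42 during playback
```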
The Distributed Common Ground System-Army User Interface
2015-06-12
The DCGS-A user interface has drawn criticism from members of the United States Congress due to its perceived lack of effectiveness. Popular opinion of the DCGS-A user interface within the military is that it is unfriendly to use and not intuitive...
Learning Analytics for Natural User Interfaces
ERIC Educational Resources Information Center
Martinez-Maldonado, Roberto; Shum, Simon Buckingham; Schneider, Bertrand; Charleer, Sven; Klerkx, Joris; Duval, Erik
2017-01-01
The continuous advancement of natural user interfaces (NUIs) allows for the development of novel and creative ways to support collocated collaborative work in a wide range of areas, including teaching and learning. The use of NUIs, such as those based on interactive multi-touch surfaces and tangible user interfaces (TUIs), can offer unique…
NASA Technical Reports Server (NTRS)
McNally, B. David (Inventor); Erzberger, Heinz (Inventor); Sheth, Kapil (Inventor)
2015-01-01
A dynamic weather route system automatically analyzes routes for in-flight aircraft flying in convective weather regions and attempts to find more time and fuel efficient reroutes around current and predicted weather cells. The dynamic weather route system continuously analyzes all flights and provides reroute advisories that are dynamically updated in real time while the aircraft are in flight. The dynamic weather route system includes a graphical user interface that allows users to visualize, evaluate, modify if necessary, and implement proposed reroutes.
Development of a Mobile User Interface for Image-based Dietary Assessment.
Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J
2010-12-31
In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, from initial ideas through implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.
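The client-side upload step in such a client-server configuration might look like the sketch below; the endpoint URL, form field names, and response shape are hypothetical placeholders, not the project's actual API.

```python
import requests

def submit_food_image(image_path: str, meal_id: str,
                      server_url: str = "https://example.org/api/estimate"):
    """Upload a food image and return the server's calorie estimate.
    The endpoint and JSON fields are placeholders, not the actual API."""
    with open(image_path, "rb") as f:
        response = requests.post(
            server_url,
            files={"image": f},
            data={"meal_id": meal_id},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"items": [...], "kcal": 640}

# result = submit_food_image("lunch.jpg", meal_id="2010-11-02-lunch")
```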
Spatial issues in user interface design from a graphic design perspective
NASA Technical Reports Server (NTRS)
Marcus, Aaron
1989-01-01
The user interface of a computer system is a visual display that provides information about the status of operations on data within the computer and control options to the user that enable adjustments to these operations. From the very beginning of computer technology the user interface was a spatial display, although its spatial features were not necessarily complex or explicitly recognized by the users. All text and nonverbal signs appeared in a virtual space generally thought of as a single flat plane of symbols. Current technology of high performance workstations permits any element of the display to appear as dynamic, multicolor, 3-D signs in a virtual 3-D space. The complexity of appearance and the user's interaction with the display provide significant challenges to the graphic designer of current and future user interfaces. In particular, spatial depiction provides many opportunities for effective communication of objects, structures, processes, navigation, selection, and manipulation. Issues are presented that are relevant to the graphic designer seeking to optimize the user interface's spatial attributes for effective visual communication.
Quantifying the role of motor imagery in brain-machine interfaces
NASA Astrophysics Data System (ADS)
Marchesotti, Silvia; Bassolino, Michela; Serino, Andrea; Bleuler, Hannes; Blanke, Olaf
2016-04-01
Despite technical advances in brain machine interfaces (BMI), for as-yet unknown reasons the ability to control a BMI remains limited to a subset of users. We investigate whether individual differences in BMI control based on motor imagery (MI) are related to differences in MI ability. We assessed whether differences in kinesthetic and visual MI, in the behavioral accuracy of MI, and in electroencephalographic variables, were able to differentiate between high- versus low-aptitude BMI users. High-aptitude BMI users showed higher MI accuracy as captured by subjective and behavioral measurements, pointing to a prominent role of kinesthetic rather than visual imagery. Additionally, for the first time, we applied mental chronometry, a measure quantifying the degree to which imagined and executed movements share a similar temporal profile. We also identified enhanced lateralized μ-band oscillations over sensorimotor cortices during MI in high- versus low-aptitude BMI users. These findings reveal that subjective, behavioral, and EEG measurements of MI are intimately linked to BMI control. We propose that poor BMI control cannot be ascribed only to intrinsic limitations of EEG recordings and that specific questionnaires and mental chronometry can be used as predictors of BMI performance (without the need to record EEG activity).
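As a concrete illustration of the lateralized μ-band measure mentioned above, the sketch below computes μ-band (8-13 Hz) power over two sensorimotor channels and a simple lateralization index; the sampling rate, channel choice, band limits, and synthetic data are assumptions, not the study's recordings or analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
# synthetic 10-second recordings for left/right sensorimotor channels (C3, C4)
c3 = rng.standard_normal(10 * fs)
c4 = rng.standard_normal(10 * fs)

def mu_power(signal, fs, band=(8.0, 13.0)):
    """Average power spectral density in the mu band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

p_c3, p_c4 = mu_power(c3, fs), mu_power(c4, fs)
lateralization = (p_c4 - p_c3) / (p_c4 + p_c3)
print(f"mu power C3={p_c3:.3f}, C4={p_c4:.3f}, lateralization index={lateralization:.3f}")
```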
Consensus-based methodology for detection communities in multilayered networks
NASA Astrophysics Data System (ADS)
Karimi-Majd, Amir-Mohsen; Fathian, Mohammad; Makrehchi, Masoud
2018-03-01
Finding groups of network users who are densely related to each other has emerged as an interesting problem in the area of social network analysis. These groups, or so-called communities, would be hidden behind the behavior of users. Most studies assume that such behavior could be understood by focusing on user interfaces, their behavioral attributes or a combination of these network layers (i.e., interfaces with their attributes). They also assume that all network layers refer to the same behavior. However, in real-life networks, users' behavior in one layer may differ from their behavior in another one. In order to cope with these issues, this article proposes a consensus-based community detection approach (CBC). CBC finds communities among nodes at each layer, in parallel. Then, the results of the layers are aggregated using a consensus clustering method. This means that different behaviors can be detected and used in the analysis. Another significant advantage is that the methodology is able to handle missing values. Three experiments on real-life and computer-generated datasets have been conducted in order to evaluate the performance of CBC. The results indicate the superiority and stability of CBC in comparison to other approaches.
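A generic sketch of the consensus idea (not the paper's exact CBC algorithm): detect communities independently on each layer, count how often each node pair is co-assigned, and extract consensus communities from pairs that agree in at least a threshold fraction of layers. The modularity-based detector and the karate-club layers are stand-ins for illustration.

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def layer_partition(graph):
    """Community detection on a single layer (modularity-based, for illustration)."""
    return [set(c) for c in greedy_modularity_communities(graph)]

def consensus_communities(layers, threshold=0.5):
    """Nodes co-assigned in at least `threshold` of the layers containing
    them both end up in the same consensus community."""
    partitions = [layer_partition(g) for g in layers]
    nodes = sorted(set().union(*(g.nodes for g in layers)))
    consensus = nx.Graph()
    consensus.add_nodes_from(nodes)
    for u, v in itertools.combinations(nodes, 2):
        counted = together = 0
        for g, parts in zip(layers, partitions):
            if u in g and v in g:                 # tolerate nodes missing from a layer
                counted += 1
                together += any(u in c and v in c for c in parts)
        if counted and together / counted >= threshold:
            consensus.add_edge(u, v)
    return [set(c) for c in nx.connected_components(consensus)]

# two stand-in behavioral layers over the same users
layer_a = nx.karate_club_graph()
layer_b = nx.karate_club_graph()
print(consensus_communities([layer_a, layer_b]))
```

Skipping layers in which a node is absent is one simple way to tolerate missing values; the threshold controls how much cross-layer agreement is required before two users are merged.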
Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.
Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti
2016-10-04
Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Within 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence.
An EMG Interface for the Control of Motion and Compliance of a Supernumerary Robotic Finger
Hussain, Irfan; Spagnoletti, Giovanni; Salvietti, Gionata; Prattichizzo, Domenico
2016-01-01
In this paper, we propose a novel electromyographic (EMG) control interface to control the motion and joint compliance of a supernumerary robotic finger. Supernumerary robotic fingers are a recently introduced class of wearable robotics that provides users with additional robotic limbs in order to compensate for or augment the existing abilities of natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and perform actions in synergy with the human limbs, the control principles of the extra finger should behave similarly to the human one, including the ability to regulate compliance. It is therefore important to propose a control interface, and to consider actuators and sensing capabilities of the robotic extra finger, compatible with implementing stiffness regulation control techniques. We propose an EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition, associated with the motion control of the robotic device, and a single-channel surface EMG electrode interface to regulate the compliance of the robotic device. We also present an updated version of a robotic extra finger where the adduction/abduction motion is realized through a ball bearing and spur gear mechanism. We have validated the proposed interface with two sets of experiments related to compensation and augmentation. In the first set of experiments, different bimanual tasks were performed with the help of the robotic device while simulating a paretic hand, since this novel wearable system can be used to compensate for the missing grasping abilities in chronic stroke patients. In the second set, the robotic extra finger is used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface was used. The obtained results demonstrate that the proposed control interface is intuitive and can successfully be used, not only to control the motion of a supernumerary robotic finger, but also to regulate its compliance. The proposed approach can also be exploited for the control of different wearable devices that have to actively cooperate with the human limbs. PMID:27891088
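A minimal sketch of how a single-channel EMG envelope could be mapped to a normalized stiffness command for a servo-actuated finger; the filtering window, rest/maximum activation levels, and stiffness range are illustrative values, not the calibration used in the paper.

```python
import numpy as np

def emg_envelope(raw_emg, fs=1000, window_s=0.2):
    """Rectify and moving-average the raw EMG to obtain a slow activation envelope."""
    rectified = np.abs(raw_emg - np.mean(raw_emg))
    win = int(window_s * fs)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def stiffness_command(envelope_sample, rest_level=0.05, max_level=1.0,
                      k_min=0.1, k_max=1.0):
    """Map the muscle activation level to a normalized joint stiffness command:
    relaxed muscle -> compliant finger, contracted muscle -> stiff finger."""
    activation = np.clip((envelope_sample - rest_level) / (max_level - rest_level), 0.0, 1.0)
    return k_min + activation * (k_max - k_min)

fs = 1000
t = np.arange(0, 2, 1 / fs)
raw = 0.3 * np.sin(2 * np.pi * 80 * t) * (t > 1)   # synthetic "contraction" in second half
env = emg_envelope(raw, fs)
print(stiffness_command(env[500]), stiffness_command(env[1500]))  # relaxed vs. contracted
```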
Riccio, Angela; Holz, Elisa Mira; Aricò, Pietro; Leotta, Francesco; Aloise, Fabio; Desideri, Lorenzo; Rimondini, Matteo; Kübler, Andrea; Mattia, Donatella; Cincotti, Febo
2015-03-01
To evaluate the impact of a hybrid control on usability of a P300-based brain-computer interface (BCI) system that was designed to control an assistive technology software and was integrated with an electromyographic channel for error correction. Proof-of-principle study with a convenience sample. Neurologic rehabilitation hospital. Participants (N=11) in this pilot study included healthy (n=8) and severely motor impaired (n=3) persons. The 3 people with severe motor disability were identified as potential candidates to benefit from the proposed hybrid BCI system for communication and environmental interaction. To eventually investigate the improvement in usability, we compared 2 modalities of BCI system control: a P300-based and a hybrid P300 electromyographic-based mode of control. System usability was evaluated according to the following outcome measures within 3 domains: (1) effectiveness (overall system accuracy and P300-based BCI accuracy); (2) efficiency (throughput time and users' workload); and (3) satisfaction (users' satisfaction). We also considered the information transfer rate and time for selection. Findings obtained in healthy participants were in favor of a higher usability of the hybrid control as compared with the nonhybrid. A similar trend was indicated by the observational results gathered from each of the 3 potential end-users. The proposed hybrid BCI control modality could provide end-users with severe motor disability with an option to exploit some residual muscular activity, which could not be fully reliable for properly controlling an assistive technology device. The findings reported in this pilot study encourage the implementation of a clinical trial involving a large cohort of end-users. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
A simple and reliable health monitoring system for shoulder health: proposal.
Liu, Shuo-Fang; Lee, Yann-Long
2014-02-26
The current health care system is complex and inefficient. A simple and reliable health monitoring system that can help patients perform medical self-diagnosis is seldom readily available. Because the medical system is vast and complex, it has hampered or delayed patients in seeking medical advice or treatment in a timely manner, which may potentially affect the patient's chances of recovery, especially for those with severe sicknesses such as cancer and heart disease. The purpose of this paper is to propose a methodology for designing a simple, low-cost, Internet-based health-screening platform. This health-screening platform will enable patients to perform medical self-diagnosis over the Internet. Historical data have shown the importance of early detection in ensuring patients receive proper treatment and a speedy recovery. The platform is designed with special emphasis on the user interface. A standard Web-based user-interface design is adopted so the user feels at ease operating in a familiar Web environment. In addition, graphics such as charts and graphs are used generously to help users visualize and understand the results of the diagnostics. The system is developed using the hypertext preprocessor (PHP) programming language. One important feature of this system platform is that it is built as a stand-alone platform, which tends to provide better protection of user privacy. The prototype system platform was developed by the National Cheng Kung University Ergonomic and Design Laboratory. The completed prototype of this system platform was submitted to the Taiwan Medical Institute for evaluation. The evaluation of 120 participants showed that this platform system is a highly effective tool in health-screening applications, and has great potential for improving the medical care quality for the general public.
Statistical modeling for visualization evaluation through data fusion.
Chen, Xiaoyu; Jin, Ran
2017-11-01
There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, advances in interactive and sensing technologies have made electroencephalogram (EEG) signals, eye movements, and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, as well as other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
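A minimal sketch of the feature-level fusion and regularized regression described above, using synthetic EEG, eye-tracking, and interaction-log features to predict a task-complexity rating; the feature counts, ridge penalty grid, and data are assumptions, not the study's actual variables or model.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_trials = 45  # e.g., 15 participants x 3 visualization designs

# synthetic per-trial features from the three sensing modalities
eeg = rng.standard_normal((n_trials, 8))      # e.g., band powers
eye = rng.standard_normal((n_trials, 4))      # e.g., fixation/saccade statistics
logs = rng.standard_normal((n_trials, 3))     # e.g., interaction counts, dwell times
X = np.hstack([eeg, eye, logs])               # simple feature-level fusion

# synthetic "perceived task complexity" ratings driven by a few features
y = 0.8 * eeg[:, 0] - 0.5 * eye[:, 1] + 0.3 * logs[:, 2] + 0.2 * rng.standard_normal(n_trials)

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
model.fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```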
Interactive multi-objective path planning through a palette-based user interface
NASA Astrophysics Data System (ADS)
Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph
2016-05-01
In a problem where a human uses supervisory control to manage robot path planning, there are times when the human does the path planning and, if satisfied, commits those paths to be executed by the robot, and the robot executes that plan. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes an objective. When a human is assigned the task of path planning for a robot, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette. A distinct color is used to represent each objective, and tradeoffs among objectives are balanced in the manner that an artist mixes colors to get the desired shade. Thus, human intent is analogous to the artist's shade of color. We call the GUI an "Adverb Palette", where the word "Adverb" represents a specific type of objective for the path, such as the adverbs "quickly" and "safely" in the commands: "travel the path quickly", "make the journey safely". The novel interactive interface provides the user an opportunity to evaluate various alternatives (that trade off between different objectives) by allowing her to visualize the instantaneous outcomes that result from her actions on the interface. In addition to assisting analysis of the various solutions given by an optimization algorithm, the palette has the additional feature of allowing the user to define and visualize her own paths by means of waypoints (guiding locations), thereby adding variety to the planning. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem. Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a great deal of flexibility using the Adverb Palette.
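One simple way to realize the palette metaphor is to read the mixed "adverb" amounts as normalized weights and scalarize the objectives of each candidate path, as in the sketch below; the objective names, values, and weighting scheme are illustrative assumptions (in practice the metrics would first be normalized to comparable scales).

```python
def blend_cost(path_metrics, palette_weights):
    """Scalarize multiple path objectives with normalized palette weights.

    path_metrics:    per-objective values for one candidate path
    palette_weights: how much of each 'adverb' the operator mixed in
    """
    total = sum(palette_weights.values())
    weights = {k: v / total for k, v in palette_weights.items()}
    return sum(weights[k] * path_metrics[k] for k in weights)

candidates = {
    "shortcut":  {"time": 40.0, "risk": 0.9, "energy": 55.0},
    "safe_loop": {"time": 70.0, "risk": 0.2, "energy": 80.0},
}
mix = {"time": 3, "risk": 1, "energy": 1}   # operator leans toward "quickly"
best = min(candidates, key=lambda name: blend_cost(candidates[name], mix))
print(best)
```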
The man/machine interface in information retrieval: Providing access to the casual user
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Granier, Martin
1984-01-01
This study is concerned with the difficulties encountered by casual users wishing to employ Information Storage and Retrieval Systems. A casual user is defined as a professional who has neither time nor desire to pursue in depth the study of the numerous and varied retrieval systems. His needs for on-line search are only occasional, and not limited to any particular system. The paper takes a close look at the state of the art of research concerned with aiding casual users of Information Storage and Retrieval Systems. Current experiments such as LEXIS, CONIT, IIDA, CITE, and CCL are presented and discussed. Comments and proposals are offered, specifically in the areas of training, learning and cost as experienced by the casual user. An extensive bibliography of recent works on the subject follows the text.
Autonomous caregiver following robotic wheelchair
NASA Astrophysics Data System (ADS)
Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary
2011-12-01
In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves side by side with a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image processing techniques, the results are converted into voltage levels through a MAX232 level converter and passed serially to the microcontroller unit, and an ultrasonic sensor detects obstacles in front of the robot. The robot has a mode selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to find obstacles, and in manual mode the keypad is used to operate the wheelchair. The microcontroller unit runs predefined C code, according to which the connected robot is controlled. The robot's several motors are activated by the motor drivers, which simply switch the motors on and off according to the control signals given by the microcontroller unit.
Transportable Applications Environment Plus, Version 5.1
NASA Technical Reports Server (NTRS)
1994-01-01
Transportable Applications Environment Plus (TAE+) computer program providing integrated, portable programming environment for developing and running application programs based on interactive windows, text, and graphical objects. Enables both programmers and nonprogrammers to construct own custom application interfaces easily and to move interfaces and application programs to different computers. Used to define corporate user interface, with noticeable improvements in application developer's and end user's learning curves. Main components are: WorkBench, What You See Is What You Get (WYSIWYG) software tool for design and layout of user interface; and WPT (Window Programming Tools) Package, set of callable subroutines controlling user interface of application program. WorkBench and WPTs written in C++, and remaining code written in C.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
.... Purpose The Exchange is proposing to adopt the new QView service. QView is a web-based, front-end... dashboard interface. The dashboard also allows a QView subscriber to track his/her executions and open...-based tool that, among other things, allows users access to all of the BX order and execution...
Reasoning about Users' Actions in a Graphical User Interface.
ERIC Educational Resources Information Center
Virvou, Maria; Kabassi, Katerina
2002-01-01
Describes a graphical user interface called IFM (Intelligent File Manipulator) that provides intelligent help to users. Explains two underlying reasoning mechanisms, one an adaptation of human plausible reasoning and one that performs goal recognition based on the effects of users' commands; and presents results of an empirical study that…
Use of force feedback to enhance graphical user interfaces
NASA Astrophysics Data System (ADS)
Rosenberg, Louis B.; Brave, Scott
1996-04-01
This project focuses on the use of force feedback sensations to enhance user interaction with standard graphical user interface paradigms. While typical joystick and mouse devices are input-only, force feedback controllers allow physical sensations to be reflected to a user. Tasks that require users to position a cursor on a given target can be enhanced by applying physical forces to the user that aid in targeting. For example, an attractive force field implemented at the location of a graphical icon can greatly facilitate target acquisition and selection of the icon. It has been shown that force feedback can enhance a user's ability to perform basic functions within graphical user interfaces.
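A minimal sketch of such an attractive force field: within a capture radius around the icon, a spring-like force pulls the cursor toward the icon center. The radius and stiffness values are illustrative, not taken from the paper.

```python
import math

def icon_attraction(cursor, icon_center, capture_radius=60.0, stiffness=0.05):
    """Return the (fx, fy) force reflected to the haptic device when the cursor
    is within an icon's capture radius; zero force outside it."""
    dx = icon_center[0] - cursor[0]
    dy = icon_center[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist > capture_radius or dist == 0.0:
        return (0.0, 0.0)
    # spring-like pull toward the icon center, growing with distance up to the radius
    return (stiffness * dx, stiffness * dy)

print(icon_attraction(cursor=(410.0, 300.0), icon_center=(400.0, 290.0)))  # inside the well
print(icon_attraction(cursor=(700.0, 300.0), icon_center=(400.0, 290.0)))  # outside, no force
```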
Influence of Learning Styles on Graphical User Interface Preferences for e-Learners
ERIC Educational Resources Information Center
Dedic, Velimir; Markovic, Suzana
2012-01-01
Implementing Web-based educational environment requires not only developing appropriate architectures, but also incorporating human factors considerations. User interface becomes the major channel to convey information in e-learning context: a well-designed and friendly enough interface is thus the key element in helping users to get the best…
ERIC Educational Resources Information Center
Joo, Young Ju; Lee, Hyeon Woo; Ham, Yookyoung
2014-01-01
This study aims to add new variables, namely user interface, personal innovativeness, and satisfaction in learning, to Davis's technology acceptance model and also examine whether learners are willing to adopt mobile learning. Thus, this study attempted to explain the structural causal relationships among user interface, personal…
Semantics of User Interface for Image Retrieval: Possibility Theory and Learning Techniques.
ERIC Educational Resources Information Center
Crehange, M.; And Others
1989-01-01
Discusses the need for a rich semantics for the user interface in interactive image retrieval and presents two methods for building such interfaces: possibility theory applied to fuzzy data retrieval, and a machine learning technique applied to learning the user's deep need. Prototypes developed using videodisks and knowledge-based software are…
SWATMOD-PREP: Graphical user interface for preparing coupled SWAT-modflow simulations
USDA-ARS?s Scientific Manuscript database
This paper presents SWATMOD-Prep, a graphical user interface that couples a SWAT watershed model with a MODFLOW groundwater flow model. The interface is based on a recently published SWAT-MODFLOW code that couples the models via mapping schemes. The spatial layout of SWATMOD-Prep guides the user t...
Reflections on Andes' Goal-Free User Interface
ERIC Educational Resources Information Center
VanLehn, Kurt
2016-01-01
Although the Andes project produced many results over its 18 years of activity, this commentary focuses on its contributions to understanding how a goal-free user interface impacts the overall design and performance of a step-based tutoring system. Whereas a goal-aligned user interface displays relevant goals as blank boxes or empty locations that…
Business Performer-Centered Design of User Interfaces
NASA Astrophysics Data System (ADS)
Sousa, Kênia; Vanderdonckt, Jean
Business Performer-Centered Design of User Interfaces is a new design methodology that adopts business process (BP) definition and a business performer perspective for managing the life cycle of user interfaces of enterprise systems. In this methodology, when the organization has a business process culture, the business processes of the organization are first defined according to a traditional methodology for this kind of artifact. These business processes are then transformed into a series of task models that represent the interactive parts of the business processes that will ultimately lead to interactive systems. When the organization has its enterprise systems but not yet its business processes modeled, the user interfaces of the systems help derive task models, which are then used to derive the business processes. The double linking between a business process and a task model, and between a task model and a user interface model, makes it possible to ensure traceability of the artifacts along multiple paths and enables a more active participation of business performers in analyzing the resulting user interfaces. In this paper, we outline how a human perspective is tied to a model-driven perspective.
A Hybrid 2D/3D User Interface for Radiological Diagnosis.
Mandalika, Veera Bhadra Harish; Chernoglazov, Alexander I; Billinghurst, Mark; Bartneck, Christoph; Hurrell, Michael A; Ruiter, Niels de; Butler, Anthony P H; Butler, Philip H
2018-02-01
This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.
Secure and Privacy-Preserving Body Sensor Data Collection and Query Scheme.
Zhu, Hui; Gao, Lijuan; Li, Hui
2016-02-01
With the development of body sensor networks and the pervasiveness of smart phones, different types of personal data can be collected in real time by body sensors, and the potential value of massive personal data has attracted considerable interest recently. However, the privacy issues of sensitive personal data are still challenging today. Aiming at these challenges, in this paper, we focus on the threats from the telemetry interface and present a secure and privacy-preserving body sensor data collection and query scheme, named SPCQ, for outsourced computing. In the proposed SPCQ scheme, users' personal information is collected by body sensors of different types and converted into multi-dimensional data; each dimension is converted into numerical form and uploaded to the cloud server, which provides a secure, efficient and accurate data query service while the privacy of sensitive personal information and users' query data is guaranteed. Specifically, based on an improved homomorphic encryption technology over a composite order group, we propose a special weighted Euclidean distance contrast algorithm (WEDC) for multi-dimensional vectors over encrypted data. With the SPCQ scheme, the confidentiality of sensitive personal data, the privacy of data users' queries and an accurate query service can be achieved in the cloud server. Detailed analysis shows that SPCQ can resist various security threats from the telemetry interface. In addition, we also implement SPCQ on an embedded device, a smart phone and a laptop with a real medical database, and extensive simulation results demonstrate that our proposed SPCQ scheme is highly efficient in terms of computation and communication costs.
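For intuition, the sketch below shows the plaintext form of a weighted Euclidean distance comparison of the kind such a query service evaluates; the actual WEDC algorithm performs this test over encrypted data via homomorphic encryption, and the feature values, weights, and threshold here are illustrative.

```python
import numpy as np

def weighted_euclidean_distance(x, y, w):
    """d_w(x, y) = sqrt(sum_i w_i * (x_i - y_i)^2), computed in the clear."""
    x, y, w = map(np.asarray, (x, y, w))
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))

def matches_query(record, query, weights, threshold):
    """The kind of comparison a query service performs: does the stored
    multi-dimensional record lie within `threshold` of the query vector?"""
    return weighted_euclidean_distance(record, query, weights) <= threshold

record = [36.8, 72.0, 118.0]      # e.g., temperature, heart rate, systolic BP
query = [37.0, 75.0, 120.0]
weights = [2.0, 1.0, 0.5]         # emphasize temperature differences
print(matches_query(record, query, weights, threshold=5.0))
```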
Literature Review on Needs of Upper Limb Prosthesis Users.
Cordella, Francesca; Ciancio, Anna Lisa; Sacchetti, Rinaldo; Davalli, Angelo; Cutti, Andrea Giovanni; Guglielmelli, Eugenio; Zollo, Loredana
2016-01-01
The loss of one hand can significantly affect the level of autonomy and the capability of performing daily living, working and social activities. Current prosthetic solutions do little to overcome these problems, due to limitations in the interfaces adopted for controlling the prosthesis and to the lack of force or tactile feedback, thus limiting hand grasp capabilities. This paper presents a literature review on the needs analysis of upper limb prosthesis users, and points out the main critical aspects of current prosthetic solutions in terms of user satisfaction and the activities of daily living users would like to perform with the prosthetic device. The ultimate goal is to provide design inputs in the prosthetic field and, at the same time, increase user satisfaction rates and reduce device abandonment. A list of requirements for upper limb prostheses is proposed, grounded in the performed analysis of user needs. It aims to (i) provide guidelines for improving the level of acceptability and usefulness of the prosthesis, by accounting for hand functional and technical aspects; (ii) propose a control architecture of PNS-based prosthetic systems able to satisfy the analyzed user wishes; and (iii) provide hints for improving the quality of the methods (e.g., questionnaires) adopted for understanding users' satisfaction with their prostheses.
Human perceptual deficits as factors in computer interface test and evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowser, S.E.
1992-06-01
Issues related to testing and evaluating human-computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits tend to be found at higher-than-overall population rates in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Secondly, interface designs should use multimode information coding.
Projection Mapping User Interface for Disabled People.
Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis
2018-01-01
Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities.
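The camera-to-projector calibration step can be sketched with a planar homography, as below; the matched calibration points, resolutions, and detected object position are synthetic, and the actual system's calibration procedure may differ.

```python
import numpy as np
import cv2

# matched calibration points: where targets appear in the camera image ...
camera_pts = np.array([[100, 120], [520, 110], [540, 400], [90, 410]], dtype=np.float32)
# ... and the projector pixels that were used to display those targets
projector_pts = np.array([[0, 0], [800, 0], [800, 600], [0, 600]], dtype=np.float32)

# planar camera-to-projector mapping (valid for a flat tabletop surface)
H, _ = cv2.findHomography(camera_pts, projector_pts)

def to_projector(camera_xy):
    """Map a point detected in the camera image to projector coordinates."""
    pt = np.array([[camera_xy]], dtype=np.float32)          # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

object_center_cam = (310.0, 260.0)      # e.g., a detected tabletop object
print(to_projector(object_center_cam))  # where to draw the highlight
```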
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practice. The advance of experimental methods and information retrieval technologies leads to explosive growth of scientific data and databases. However, due to heterogeneity in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases become accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration becomes a major trend for managing and synthesising data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns into three perspectives: data, process and Web-based user interface. Each layer encapsulates the heterogeneity issues of one aspect. To facilitate the mapping and convergence of data, an ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information about users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and case studies are presented to illustrate the promising capabilities of the proposed approach.
Development of a graphical user interface for the global land information system (GLIS)
Alstad, Susan R.; Jackson, David A.
1993-01-01
The process of developing a Motif graphical user interface for the Global Land Information System (GLIS) involved incorporating user requirements, in-house visual and functional design requirements, and Open Software Foundation (OSF) Motif style guide standards. The Motif user interface windows were developed, and the software supporting the Motif window functions was written, using the C programming language. The GLIS architecture was modified to support multiple servers and remote handlers running the X Window System by forming a network of servers and handlers connected by TCP/IP communications. In April 1993, prior to release, the GLIS graphical user interface and system architecture modifications were tested by developers and users located at the EROS Data Center and 11 beta test sites across the country.
Language workbench user interfaces for data analysis
Benson, Victoria M.
2015-01-01
Biological data analysis is frequently performed with command line software. While this practice provides considerable flexibility for computationally savvy individuals, such as investigators trained in bioinformatics, it also creates a barrier to the widespread use of data analysis software by investigators trained as biologists and/or clinicians. Workflow systems such as Galaxy and Taverna have been developed to try to provide generic user interfaces that can wrap command line analysis software. These solutions are useful for problems that can be solved with workflows and that do not require specialized user interfaces. However, some types of analyses can benefit from custom user interfaces. For instance, developing biomarker models from high-throughput data is a type of analysis that can be expressed more succinctly with specialized user interfaces. Here, we show how Language Workbench (LW) technology can be used to model the biomarker development and validation process. We developed a language that models the concepts of Dataset, Endpoint, Feature Selection Method and Classifier. These high-level language concepts map directly to abstractions that analysts who develop biomarker models are familiar with. We found that user interfaces developed in the Meta-Programming System (MPS) LW provide convenient means to configure a biomarker development project, to train models and to view the validation statistics. We discuss several advantages of developing user interfaces for data analysis with a LW, including increased interface consistency, portability and extension by language composition. The language developed during this experiment is distributed as an MPS plugin (available at http://campagnelab.org/software/bdval-for-mps/). PMID:25755929
A hybrid BCI for enhanced control of a telepresence robot.
Carlson, Tom; Tonin, Luca; Perdikis, Serafeim; Leeb, Robert; del R Millán, José
2013-01-01
Motor-disabled end users have successfully driven a telepresence robot in a complex environment using a Brain-Computer Interface (BCI). However, to facilitate the interaction aspect that underpins the notion of telepresence, users must be able to voluntarily and reliably stop the robot at any moment, not just drive from point to point. In this work, we propose to exploit the user's residual muscular activity to provide a fast and reliable control channel, which can start/stop the telepresence robot at any moment. Our preliminary results show that not only does this hybrid approach increase the accuracy, but it also helps to reduce the workload and was the preferred control paradigm of all the participants.
NASA Technical Reports Server (NTRS)
Szczur, Martha R.
1991-01-01
The Transportable Applications Environment (TAE) Plus, developed at GSFC, is an advanced portable user interface development environment which simplifies the process of creating and managing complex application graphical user interfaces (GUIs), supports prototyping, allows applications to be ported easily between different platforms and encourages appropriate levels of user interface consistency between applications. The following topics are discussed: the capabilities of the TAE Plus tool; how the implementation has utilized state-of-the-art technologies within graphic workstations; and how it has been used both within and outside of NASA.
NASA Technical Reports Server (NTRS)
Singley, P. T.; Bell, J. D.; Daugherty, P. F.; Hubbs, C. A.; Tuggle, J. G.
1993-01-01
This paper will discuss user interface development and the structure and use of metadata for the Atmospheric Radiation Measurement (ARM) Archive. The ARM Archive, located at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, is the data repository for the U.S. Department of Energy's (DOE's) ARM Project. After a short description of the ARM Project and the ARM Archive's role, we will consider the philosophy and goals, constraints, and prototype implementation of the user interface for the archive. We will also describe the metadata that are stored at the archive and support the user interface.
Remapping residual coordination for controlling assistive devices and recovering motor functions
Pierella, Camilla; Abdollahi, Farnaz; Farshchiansadegh, Ali; Pedersen, Jessica; Thorp, Elias; Mussa-Ivaldi, Ferdinando A.; Casadio, Maura
2015-01-01
The concept of human motor redundancy has attracted much attention since the early studies of motor control, as it highlights the ability of the motor system to generate a great variety of movements to achieve any single well-defined goal. The abundance of degrees of freedom in the human body may be a fundamental resource in the learning and remapping problems that are encountered in human–machine interface (HMI) development. The HMI can act at different levels, decoding brain signals or body signals to control an external device. The transformation from neural signals to device commands is the core of research on brain-machine interfaces (BMIs). However, while BMIs completely bypass the final path of the motor system, body-machine interfaces (BoMIs) take advantage of motor skills that are still available to the user and have the potential to enhance these skills through their consistent use. BoMIs empower people with severe motor disabilities with the possibility to control external devices, and they concurrently offer the opportunity to focus on achieving rehabilitative goals. In this study we describe a theoretical paradigm for the use of a BoMI in rehabilitation. The proposed BoMI remaps the user’s residual upper body mobility to the two coordinates of a cursor on a computer screen. This mapping is obtained by principal component analysis (PCA). We hypothesize that the BoMI can be specifically programmed to engage the users in functional exercises aimed at partial recovery of motor skills, while simultaneously controlling the cursor and carrying out functional tasks, e.g. playing games. Specifically, PCA allows us to select not only the subspace that is most comfortable for the user to act upon, but also the degrees of freedom and coordination patterns that the user has more difficulty engaging. In this article, we describe a family of map modifications that can be made to change the motor behavior of the user. Depending on the characteristics of the impairment of each high-level spinal cord injury (SCI) survivor, we can make modifications to restore a higher level of symmetric mobility (left versus right), or to increase the strength and range of motion of the upper body that was spared by the injury. Results showed that this approach restored symmetry between the left and right sides of the body, with an increase in the mobility and strength of all the degrees of freedom involved in the control of the interface for the participants. This is a proof of concept that our BoMI may be used concurrently to control assistive devices and reach specific rehabilitative goals. Engaging the users in functional and entertaining tasks while practicing the interface and changing the map in the proposed ways is a novel approach to rehabilitation treatments facilitated by portable and low-cost technologies. PMID:26341935
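As a rough illustration of the PCA remapping described above, the hedged Python sketch below projects multichannel upper-body motion signals onto their first two principal components to drive a cursor; the channel count, calibration data and gain are assumptions for illustration, not the authors' implementation.

import numpy as np
from sklearn.decomposition import PCA

# Stand-in for calibration recordings: 2000 samples of 8 upper-body motion channels.
calibration = np.random.randn(2000, 8)

pca = PCA(n_components=2)
pca.fit(calibration)                      # learn the 8-D -> 2-D map

def body_to_cursor(sample, gain=1.0):
    """Map one 8-channel body-motion sample to (x, y) cursor coordinates."""
    x, y = gain * pca.transform(sample.reshape(1, -1))[0]
    return x, y

print(body_to_cursor(calibration[0]))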
A new relational database structure and online interface for the HITRAN database
NASA Astrophysics Data System (ADS)
Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan
2013-11-01
A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make the most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database in ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.
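To make the linked-table idea concrete, here is a hedged Python/SQLite sketch of a toy relational layout for line-transition data; the table and column names are illustrative assumptions and do not reproduce the actual HITRAN schema.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE molecule (id INTEGER PRIMARY KEY, formula TEXT);
CREATE TABLE transition (
    id INTEGER PRIMARY KEY,
    molecule_id INTEGER REFERENCES molecule(id),
    nu REAL,   -- transition wavenumber
    sw REAL    -- line intensity
);
INSERT INTO molecule VALUES (1, 'H2O'), (2, 'CO2');
INSERT INTO transition VALUES (1, 1, 1554.35, 1.2e-22), (2, 2, 667.66, 7.8e-20);
""")

# A typical query: all lines of one molecule inside a wavenumber window.
rows = db.execute("""
    SELECT m.formula, t.nu, t.sw
    FROM transition t JOIN molecule m ON m.id = t.molecule_id
    WHERE m.formula = ? AND t.nu BETWEEN ? AND ?
    ORDER BY t.nu
""", ("H2O", 1000.0, 2000.0)).fetchall()
print(rows)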
System for assisted mobility using eye movements based on electrooculography.
Barea, Rafael; Boquete, Luciano; Mazo, Manuel; López, Elena
2002-12-01
This paper describes an eye-control method based on electrooculography (EOG) to develop a system for assisted mobility. One of its most important features is its modularity, making it adaptable to the particular needs of each user according to the type and degree of handicap involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several human-machine interfaces (HMI) based on EOG are discussed, focusing our study on guiding and controlling a wheelchair for disabled people, where the control is actually effected by eye movements within the socket. Different techniques and guidance strategies are then shown with comments on the advantages and disadvantages of each one. The system consists of a standard electric wheelchair with an on-board computer, sensors and a graphic user interface run by the computer. On the other hand, this eye-control method can be applied to handle graphical interfaces, where the eye is used as a computer mouse. Results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aids for handicapped persons.
Urine collection apparatus. [feminine hygiene
NASA Technical Reports Server (NTRS)
Michaud, R. B. (Inventor)
1981-01-01
A urine collection device for females comprises an interface body with an interface surface for engagement with the user's body. The interface body comprises a forward portion defining a urine-receiving bore which has an inlet in the interface surface adapted to be disposed in surrounding relation to the urethral opening of the user. The interface body also has a rear portion integrally adjoining the forward portion and a non-invasive vaginal seal on the interface surface for sealing the vagina of the user from communication with the urine-receiving bore. An absorbent pad is removably supported on the interface body and extends laterally therefrom. A garment for supporting the urine collection device is also disclosed.
Teleoperation of Robonaut Using Finger Tracking
NASA Technical Reports Server (NTRS)
Champoux, Rachel G.; Luo, Victor
2012-01-01
With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is teleoperating a robot. For humanoid robots, a finger tracking interface is required because of the level of complexity in a human-like hand, where a joystick isn't accurate. Moreover, for some tasks, using one's own hands allows the user to communicate their intentions more effectively than other input. The purpose of this project was to develop a natural user interface for someone to teleoperate a robot that is elsewhere. Specifically, this was designed to control Robonaut on the International Space Station to do tasks too dangerous and/or too trivial for human astronauts. This interface was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time; the information is then relayed to a simulator, the stand-in for Robonaut, with a slight delay.
Yuan, Michael Juntao; Finley, George Mike; Long, Ju; Mills, Christy; Johnson, Ron Kim
2013-01-31
Clinical decision support systems (CDSS) are important tools to improve health care outcomes and reduce preventable medical adverse events. However, the effectiveness and success of CDSS depend on their implementation context and usability in complex health care settings. As a result, usability design and validation, especially in real world clinical settings, are crucial aspects of successful CDSS implementations. Our objective was to develop a novel CDSS to help frontline nurses better manage critical symptom changes in hospitalized patients, hence reducing preventable failure to rescue cases. A robust user interface and implementation strategy that fit into existing workflows was key for the success of the CDSS. Guided by a formal usability evaluation framework, UFuRT (user, function, representation, and task analysis), we developed a high-level specification of the product that captures key usability requirements and is flexible to implement. We interviewed users of the proposed CDSS to identify requirements, listed functions, and operations the system must perform. We then designed visual and workflow representations of the product to perform the operations. The user interface and workflow design were evaluated via heuristic and end user performance evaluation. The heuristic evaluation was done after the first prototype, and its results were incorporated into the product before the end user evaluation was conducted. First, we recruited 4 evaluators with strong domain expertise to study the initial prototype. Heuristic violations were coded and rated for severity. Second, after development of the system, we assembled a panel of nurses, consisting of 3 licensed vocational nurses and 7 registered nurses, to evaluate the user interface and workflow via simulated use cases. We recorded whether each session was successfully completed and its completion time. Each nurse was asked to use the National Aeronautics and Space Administration (NASA) Task Load Index to self-evaluate the amount of cognitive and physical burden associated with using the device. A total of 83 heuristic violations were identified in the studies. The distribution of the heuristic violations and their average severity are reported. The nurse evaluators successfully completed all 30 sessions of the performance evaluations. All nurses were able to use the device after a single training session. On average, the nurses took 111 seconds (SD 30 seconds) to complete the simulated task. The NASA Task Load Index results indicated that the work overhead on the nurses was low. In fact, most of the burden measures were consistent with zero. The only potentially significant burden was temporal demand, which was consistent with the primary use case of the tool. The evaluation has shown that our design was functional and met the requirements demanded by the nurses' tight schedules and heavy workloads. The user interface embedded in the tool provided compelling utility to the nurse with minimal distraction.
Finding and Exploring Health Information with a Slider-Based User Interface.
Pang, Patrick Cheong-Iao; Verspoor, Karin; Pearce, Jon; Chang, Shanton
2016-01-01
Despite the fact that search engines are the primary channel to access online health information, there are better ways to find and explore health information on the web. Search engines are prone to problems when they are used to find health information. For instance, users have difficulties in expressing health scenarios with appropriate search keywords, search results are not optimised for medical queries, and the search process does not account for users' literacy levels and reading preferences. In this paper, we describe our approach to addressing these problems by introducing a novel design using a slider-based user interface for discovering health information without the need for precise search keywords. The user evaluation suggests that the interface is easy to use and able to assist users in the process of discovering new information. This study demonstrates the potential value of adopting slider controls in the user interface of health websites for navigation and information discovery.
Rapid Prototyping of Hydrologic Model Interfaces with IPython
NASA Astrophysics Data System (ADS)
Farthing, M. W.; Winters, K. D.; Ahmadia, A. J.; Hesser, T.; Howington, S. E.; Johnson, B. D.; Tate, J.; Kees, C. E.
2014-12-01
A significant gulf still exists between the state of practice and state of the art in hydrologic modeling. Part of this gulf is due to the lack of adequate pre- and post-processing tools for newly developed computational models. The development of user interfaces has traditionally lagged several years behind the development of a particular computational model or suite of models. As a result, models with mature interfaces often lack key advancements in model formulation, solution methods, and/or software design and technology. Part of the problem has been a focus on developing monolithic tools to provide comprehensive interfaces for the entire suite of model capabilities. Such efforts require expertise in software libraries and frameworks for creating user interfaces (e.g., Tcl/Tk, Qt, and MFC). These tools are complex and require significant investment in project resources (time and/or money) to use. Moreover, providing the required features for the entire range of possible applications and analyses creates a cumbersome interface. For a particular site or application, the modeling requirements may be simplified or at least narrowed, which can greatly reduce the number and complexity of options that need to be accessible to the user. However, monolithic tools usually are not adept at dynamically exposing specific workflows. Our approach is to deliver highly tailored interfaces to users. These interfaces may be site and/or process specific. As a result, we end up with many, customized interfaces rather than a single, general-use tool. For this approach to be successful, it must be efficient to create these tailored interfaces. We need technology for creating quality user interfaces that is accessible and has a low barrier for integration into model development efforts. Here, we present efforts to leverage IPython notebooks as tools for rapid prototyping of site and application-specific user interfaces. We provide specific examples from applications in near-shore environments as well as levee analysis. We discuss our design decisions and methodology for developing customized interfaces, strategies for delivery of the interfaces to users in various computing environments, as well as implications for the design/implementation of simulation models.
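As a hedged illustration of this notebook-based approach, the Python sketch below (meant to be run in a Jupyter/IPython notebook) wires a toy model run to ipywidgets sliders; the function name, parameters and ranges are assumptions for illustration, not one of the site-specific interfaces described above.

import numpy as np
from ipywidgets import interact, FloatSlider

def run_levee_seepage(hydraulic_conductivity=1e-5, head_difference=2.0):
    """Toy stand-in for a model run; prints a mock seepage flux."""
    flux = hydraulic_conductivity * head_difference * np.ones(10)
    print(f"mean seepage flux: {flux.mean():.2e} m/s")

# Two sliders are enough to expose a narrowed, site-specific workflow.
interact(run_levee_seepage,
         hydraulic_conductivity=FloatSlider(min=1e-6, max=1e-4, step=1e-6,
                                            value=1e-5, readout_format=".1e"),
         head_difference=FloatSlider(min=0.0, max=5.0, step=0.1, value=2.0))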
Weather information network including graphical display
NASA Technical Reports Server (NTRS)
Leger, Daniel R. (Inventor); Burdon, David (Inventor); Son, Robert S. (Inventor); Martin, Kevin D. (Inventor); Harrison, John (Inventor); Hughes, Keith R. (Inventor)
2006-01-01
An apparatus for providing weather information onboard an aircraft includes a processor unit and a graphical user interface. The processor unit processes weather information after it is received onboard the aircraft from a ground-based source, and the graphical user interface provides a graphical presentation of the weather information to a user onboard the aircraft. Preferably, the graphical user interface includes one or more user-selectable options for graphically displaying at least one of convection information, turbulence information, icing information, weather satellite information, SIGMET information, significant weather prognosis information, and winds aloft information.
The application of connectionism to query planning/scheduling in intelligent user interfaces
NASA Technical Reports Server (NTRS)
Short, Nicholas, Jr.; Shastri, Lokendra
1990-01-01
In the mid-nineties, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Setting aside the overall problems in distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in improving run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading-activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.
User interface issues in supporting human-computer integrated scheduling
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Biefeld, Eric W.
1991-01-01
Explored here are the user interface problems encountered with the Operations Missions Planner (OMP) project at the Jet Propulsion Laboratory (JPL). OMP uses a unique iterative approach to planning that places additional requirements on the user interface, particularly to support system development and maintenance. These requirements are necessary to support the concepts of heuristically controlled search, in-progress assessment, and iterative refinement of the schedule. The techniques used to address the OMP interface needs are given.
Water flow algorithm decision support tool for travelling salesman problem
NASA Astrophysics Data System (ADS)
Kamarudin, Anis Aklima; Othman, Zulaiha Ali; Sarim, Hafiz Mohd
2016-08-01
This paper discusses the role of a decision support tool (DST) for the Travelling Salesman Problem (TSP) in helping researchers working in the same area obtain better results from a proposed algorithm. A study was conducted using the Rapid Application Development (RAD) model as the methodology, which includes requirement planning, user design, construction and cutover. A Water Flow Algorithm (WFA) with an improved initialization technique is used as the proposed algorithm in this study and is evaluated for effectiveness against TSP cases. The DST evaluation consisted of usability testing covering system use, quality of information, quality of interface and overall satisfaction. The evaluation is needed to determine whether this tool can assist users in making a decision to solve TSP problems with the proposed algorithm. Statistical results show the ability of this tool to help researchers conduct experiments on the WFA with the improved TSP initialization.
The User Interface: How Does Your Product Look and Feel?
ERIC Educational Resources Information Center
Strukhoff, Roger
1987-01-01
Discusses the importance of user cordial interfaces to the successful marketing of optical data disk products, and describes features of several online systems. The topics discussed include full text searching, indexed searching, menu driven interfaces, natural language interfaces, computer graphics, and possible future developments. (CLB)
ORBIT: an integrated environment for user-customized bioinformatics tools.
Bellgard, M I; Hiew, H L; Hunter, A; Wiebrands, M
1999-10-01
There are a large number of computational programs freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an html form page) cannot be customized from the client side as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users. However, this implies that a user cannot set as 'default' advanced program parameters on the form or even customize the interface to his/her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) where individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatic needs. As interfaces are created on the client machine independent of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. The interface customization is relatively quick (between 10 and 60 min) and all client interfaces are integrated into a single modular environment which will run on any computer platform supporting Java. The system has been developed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.
Ahmed, Adil; Chandra, Subhash; Herasevich, Vitaly; Gajic, Ognjen; Pickering, Brian W
2011-07-01
The care of critically ill patients generates large quantities of data. Increasingly, these data are presented to the provider within an electronic medical record. The manner in which data are organized and presented can affect the ability of users to synthesize those data into meaningful information. The objective of this study was to test the hypothesis that novel user interfaces, which prioritize the display of high-value data to providers within system-based packages, reduce task load and result in fewer errors of cognition compared with established user interfaces that do not. Randomized crossover study. Academic tertiary referral center. Attending, resident and fellow critical care physicians. Novel health care record user interface. Subjects, randomly assigned to either a standard electronic medical record or a novel user interface, were asked to perform a structured task. The task required the subjects to use the assigned electronic environment to review the medical record of an intensive care unit patient said to be actively bleeding for data that formed the basis of answers to clinical questions posed in the form of a structured questionnaire. The primary outcome was task load, measured using the paper version of the NASA-task load index. Secondary outcome measures included time to task completion, number of errors of cognition measured by comparison of subject to post hoc gold standard questionnaire responses, and the quantity of information presented to subjects by each environment. Twenty subjects completed the task on eight patients, resulting in 160 patient-provider encounters (80 in each group). The standard electronic medical record contained a much larger data volume with a median (interquartile range) number of data points per patient of 1008 (895-1183) compared with 102 (77-112) contained within the novel user interface. The median (interquartile range) NASA-task load index values were 38.8 (32-45) and 58 (45-65) for the novel user interface compared with the standard electronic medical record (p < .001). The median (interquartile range) times in seconds taken to complete the task for four consecutive patients were 93 (57-132), 60 (48-71), 68 (48-80), and 54 (42-64) for the novel user interface compared with 145 (109-201), 125 (113-162), 129 (100-145), and 112 (92-123) for the standard interface (p < .0001), respectively. The median (interquartile range) number of errors per provider was 0.5 (0-1) and two (0.25-3) for the novel user interface and standard electronic medical record interface, respectively (p = .007). A novel user interface was designed based on the information needs of intensive care unit providers, with the specific development goal of reducing the task load and errors of cognition associated with filtering, extracting, and using medical data contained within a comprehensive electronic medical record. The results of this simulated clinical experiment suggest that the configuration of the intensive care unit user interface contributes significantly to the task load, time to task completion, and number of errors of cognition associated with the identification, and subsequent use, of relevant patient data. Task-specific user interfaces, developed from an understanding of provider information requirements, offer advantages over interfaces currently available within a standard electronic medical record.
Latent Factors Limiting the Performance of sEMG-Interfaces
Lobov, Sergey; Krilova, Nadia; Kastalskiy, Innokentiy; Kazantsev, Victor; Makarov, Valeri A
2018-01-01
Recent advances in recording and real-time analysis of surface electromyographic signals (sEMG) have fostered the use of sEMG human–machine interfaces for controlling personal computers, prostheses of upper limbs, and exoskeletons among others. Despite a relatively high mean performance, sEMG-interfaces still exhibit strong variance in the fidelity of gesture recognition among different users. Here, we systematically study the latent factors determining the performance of sEMG-interfaces in synthetic tests and in an arcade game. We show that the degree of muscle cooperation and the amount of the body fatty tissue are the decisive factors in synthetic tests. Our data suggest that these factors can only be adjusted by long-term training, which promotes fine-tuning of low-level neural circuits driving the muscles. Short-term training has no effect on synthetic tests, but significantly increases the game scoring. This implies that it works at a higher decision-making level, not relevant for synthetic gestures. We propose a procedure that enables quantification of the gestures’ fidelity in a dynamic gaming environment. For each individual subject, the approach allows identifying “problematic” gestures that decrease gaming performance. This information can be used for optimizing the training strategy and for adapting the signal processing algorithms to individual users, which could be a way for a qualitative leap in the development of future sEMG-interfaces. PMID:29642410
NASA Technical Reports Server (NTRS)
Lyons, J. T.; Borchers, William R.
1993-01-01
Documentation for the User Interface Program for the Minimum Hamiltonian Ascent Trajectory Evaluation (MASTRE) is provided. The User Interface Program is a separate software package designed to ease the user input requirements when using the MASTRE Trajectory Program. This document supplements documentation on the MASTRE Program that consists of the MASTRE Engineering Manual and the MASTRE Programmers Guide. The User Interface Program provides a series of menus and tables using the VAX Screen Management Guideline (SMG) software. These menus and tables allow the user to modify the MASTRE Program input without the need for learning the various program dependent mnemonics. In addition, the User Interface Program allows the user to modify and/or review additional input Namelist and data files, to build and review command files, to formulate and calculate mass properties related data, and to have a plotting capability.
Development of a Mobile User Interface for Image-based Dietary Assessment
Kim, SungYe; Schap, TusaRebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J.; Ebert, David S.; Boushey, Carol J.
2011-01-01
In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, from initial ideas through implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records. PMID:24455755
1985-11-01
The User Interface consists of a set of callable execution-time routines available to an application program for form processing (IISS Function Screen …). Provisions for test consist of the normal testing techniques that are accomplished during the construction process; they consist of design and code … The application presents a form to the user which must be filled in with information for processing by that application. The application then …
An Object-Oriented Graphical User Interface for a Reusable Rocket Engine Intelligent Control System
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Musgrave, Jeffrey L.; Guo, Ten-Huei; Paxson, Daniel E.; Wong, Edmond; Saus, Joseph R.; Merrill, Walter C.
1994-01-01
An intelligent control system for reusable rocket engines under development at NASA Lewis Research Center requires a graphical user interface to allow observation of the closed-loop system in operation. The simulation testbed consists of a real-time engine simulation computer, a controls computer, and several auxiliary computers for diagnostics and coordination. The system is set up so that the simulation computer could be replaced by the real engine and the change would be transparent to the control system. Because of the hard real-time requirement of the control computer, putting a graphical user interface on it was not an option. Thus, a separate computer used strictly for the graphical user interface was warranted. An object-oriented LISP-based graphical user interface has been developed on a Texas Instruments Explorer 2+ to indicate the condition of the engine to the observer through plots, animation, interactive graphics, and text.
The design and evaluation of an activity monitoring user interface for people with stroke.
Hart, Phil; Bierwirth, Rebekah; Fulk, George; Sazonov, Edward
2014-01-01
Usability is an important topic in the field of telerehabilitation research. Older users with disabilities in particular, present age-related and disability-related challenges that should be accommodated for in the design of a user interface for a telerehabilitation system. This paper describes the design, implementation, and assessment of a telerehabilitation system user interface that tries to maximize usability for an elderly user who has experienced a stroke. An Internet-connected Nintendo(®) Wii™ gaming system is selected as a hardware platform, and a server and website are implemented to process and display the feedback information. The usability of the interface is assessed with a trial consisting of 18 subjects: 10 healthy Doctor of Physical Therapy students and 8 people with a stroke. Results show similar levels of usability and high satisfaction with the gaming system interface from both groups of subjects.
Gromita: a fully integrated graphical user interface to gromacs 4.
Sellis, Diamantis; Vlachakis, Dimitrios; Vlassi, Metaxia
2009-09-07
Gromita is a fully integrated and efficient graphical user interface (GUI) to the recently updated molecular dynamics suite Gromacs, version 4. Gromita is a cross-platform, perl/tcl-tk based, interactive front end designed to break the command line barrier and introduce a new user-friendly environment to run molecular dynamics simulations through Gromacs. Our GUI features a novel workflow interface that guides the user through each logical step of the molecular dynamics setup process, making it accessible to both advanced and novice users. This tool provides a seamless interface to the Gromacs package, while providing enhanced functionality by speeding up and simplifying the task of setting up molecular dynamics simulations of biological systems. Gromita can be freely downloaded from http://bio.demokritos.gr/gromita/.
Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.; Thurman, David A.
1999-01-01
AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive, help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. It will conclude with the year 2 proposal to more fully develop the case base, the user interface (including the category structure), the interface with the current DAAC Help System, the development of tools to add new cases, and user testing and evaluation at (perhaps) the Goddard DAAC.
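A hedged Python sketch of the category-indexed case retrieval described above follows; the in-memory dictionary and the example cases are illustrative stand-ins for the Oracle tables accessed via JDBC in the actual demonstration.

# Category path -> previously answered help cases (questions and answers).
CASES = {
    ("data access", "ftp problems"): [
        {"question": "FTP transfer times out for large granules",
         "answer": "Retry the transfer in passive FTP mode."},
    ],
    ("data formats", "hdf"): [
        {"question": "How do I read HDF-EOS swath data?",
         "answer": "Use an HDF-EOS capable reader."},
    ],
}

def retrieve(category, subcategory):
    """Return the help cases filed under a category path, if any."""
    return CASES.get((category.lower(), subcategory.lower()), [])

for case in retrieve("Data Access", "FTP problems"):
    print(case["question"], "->", case["answer"])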
Robust artifactual independent component classification for BCI practitioners.
Winkler, Irene; Brandl, Stephanie; Horn, Franziska; Waldburger, Eric; Allefeld, Carsten; Tangermann, Michael
2014-06-01
EEG artifacts of non-neural origin can be separated from neural signals by independent component analysis (ICA). It is unclear (1) how robustly recently proposed artifact classifiers transfer to novel users, novel paradigms or changed electrode setups, and (2) how artifact cleaning by a machine learning classifier impacts the performance of brain-computer interfaces (BCIs). Addressing (1), the robustness of different strategies with respect to the transfer between paradigms and electrode setups of a recently proposed classifier is investigated on offline data from 35 users and 3 EEG paradigms, which contain 6303 expert-labeled components from two ICA and preprocessing variants. Addressing (2), the effect of artifact removal on single-trial BCI classification is estimated on BCI trials from 101 users and 3 paradigms. We show that (1) the proposed artifact classifier generalizes to completely different EEG paradigms. To obtain similar results under massively reduced electrode setups, a proposed novel strategy improves artifact classification. Addressing (2), ICA artifact cleaning has little influence on average BCI performance when analyzed by state-of-the-art BCI methods. When slow motor-related features are exploited, performance varies strongly between individuals, as artifacts may obstruct relevant neural activity or are inadvertently used for BCI control. Robustness of the proposed strategies can be reproduced by EEG practitioners as the method is made available as an EEGLAB plug-in.
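As a hedged sketch of the ICA cleaning step discussed above (not the authors' classifier), the Python snippet below separates multichannel EEG-like data into independent components with FastICA, zeroes the components a classifier would flag as artifactual, and back-projects the rest to channel space.

import numpy as np
from sklearn.decomposition import FastICA

eeg = np.random.randn(10000, 32)          # stand-in for samples x channels

ica = FastICA(n_components=32, random_state=0)
sources = ica.fit_transform(eeg)          # unmix into independent components

artifact_idx = [0, 5]                     # indices an artifact classifier would flag
sources[:, artifact_idx] = 0.0            # drop ocular/muscular components

cleaned = ica.inverse_transform(sources)  # back-project to channel space
print(cleaned.shape)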
2014-01-01
Background: Clinical practice guidelines are useful for physicians, and guidelines are available on the Internet from various websites such as Vidal Recos. However, these guidelines are long and difficult to read, especially during consultation. Similar difficulties have been encountered with drug summaries of product characteristics. In a previous work, we have proposed an iconic language (called VCM, for Visualization of Concepts in Medicine) for representing patient conditions, treatments and laboratory tests, and we have used these icons to design a user interface that graphically indexes summaries of product characteristics. In the current study, our objective was to design and evaluate an iconic user interface for the consultation of clinical practice guidelines by physicians. Methods: Focus groups of physicians were set up to identify the difficulties encountered when reading guidelines. Icons were integrated into Vidal Recos, taking human factors into account. The resulting interface includes a graphical summary and an iconic indexation of the guideline. The new interface was evaluated. We compared the response times and the number of errors recorded when physicians answered questions about two clinical scenarios using the interactive iconic interface or a textual interface. Users’ perceived usability was evaluated with the System Usability Scale. Results: The main difficulties encountered by physicians when reading guidelines were obtaining an overview and finding recommendations for patients corresponding to “particular cases”. We designed a graphical interface for guideline consultation, using icons to identify particular cases and providing a graphical summary of the icons organized by anatomy and etiology. The evaluation showed that physicians gave clinical responses more rapidly with the iconic interface than the textual interface (25.2 seconds versus 45.6, p < 0.05). The physicians appreciated the new interface, and the System Usability Scale score value was 75 (between good and excellent). Conclusion: An interactive iconic interface can provide physicians with an overview of clinical practice guidelines, and can decrease the time required to access the content of such guidelines. PMID:25158762
The Research on Automatic Construction of Domain Model Based on Deep Web Query Interfaces
NASA Astrophysics Data System (ADS)
JianPing, Gu
The integration of services is transparent, meaning that users no longer face millions of Web services, do not need to care where the required data are stored, and do not need to learn how to obtain these data. In this paper, we analyze the uncertainty of schema matching and then propose a series of similarity measures. To reduce the cost of execution, we propose a type-based optimization method and a schema-matching pruning method for numeric data. Based on the above analysis, we propose an uncertain schema matching method. The experiments prove the effectiveness and efficiency of our method.
Development of a simulated smart pump interface.
Elias, Beth L; Moss, Jacqueline A; Shih, Alan; Dillavou, Marcus
2014-01-01
Medical device user interfaces are increasingly complex, resulting in a need for evaluation in clinically accurate settings. Simulation of these interfaces can allow for evaluation, training, and use for research without the risk of harming patients and with a significant cost reduction over using the actual medical devices. This pilot project was phase 1 of a study to define and evaluate a methodology for development of simulated medical device interface technology to be used for education, device development, and research. Digital video and audio recordings of interface interactions were analyzed to develop a model of a smart intravenous medication infusion pump user interface. This model was used to program a high-fidelity simulated smart intravenous medication infusion pump user interface on an inexpensive netbook platform.
NASA Technical Reports Server (NTRS)
Szczur, Martha R.
1991-01-01
The Transportable Applications Environment Plus (TAE Plus), developed at the NASA Goddard Space Flight Center, is a portable, what-you-see-is-what-you-get (WYSIWYG) user interface development and management system. Its primary objective is to provide an integrated software environment that allows interactive prototyping and development of graphical user interfaces, as well as management of the user interface within the operational domain. TAE Plus is being applied to many types of applications; what TAE Plus provides, how the implementation has utilized state-of-the-art technologies within graphic workstations, and how it has been used both within and outside of NASA are discussed.
MuSim, a Graphical User Interface for Multiple Simulation Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Thomas; Cummings, Mary Anne; Johnson, Rolland
2016-06-01
MuSim is a new user-friendly program designed to interface to many different particle simulation codes, regardless of their data formats or geometry descriptions. It presents the user with a compelling graphical user interface that includes a flexible 3-D view of the simulated world plus powerful editing and drag-and-drop capabilities. All aspects of the design can be parametrized so that parameter scans and optimizations are easy. It is simple to create plots and display events in the 3-D viewer (with a slider to vary the transparency of solids), allowing for an effortless comparison of different simulation codes. Simulation codes: G4beamline, MAD-X, and MCNP; more coming. Many accelerator design tools and beam optics codes were written long ago, with primitive user interfaces by today's standards. MuSim is specifically designed to make it easy to interface to such codes, providing a common user experience for all, and permitting the construction and exploration of models with very little overhead. For today's technology-driven students, graphical interfaces meet their expectations far better than text-based tools, and education in accelerator physics is one of our primary goals.
Design and Implementation of a Set-Top Box-Based Homecare System Using Hybrid Cloud.
Lin, Bor-Shing; Hsiao, Pei-Chi; Cheng, Po-Hsun; Lee, I-Jung; Jan, Gene Eu
2015-11-01
Telemedicine has become a prevalent topic in recent years, and several telemedicine systems have been proposed; however, such systems are an unsuitable fit for the daily requirements of users. The system proposed in this study was developed as a set-top box integrated with the Android™ (Google, Mountain View, CA) operating system to provide a convenient and user-friendly interface. The proposed system can assist with family healthcare management, telemedicine service delivery, and information exchange among hospitals. To manage the system, a novel type of hybrid cloud architecture was also developed. Updated information is stored on a public cloud, enabling medical staff members to rapidly access information when diagnosing patients. In the long term, the stored data can be reduced to improve the efficiency of the database. The proposed design offers a robust architecture for storing data in a homecare system and can thus resolve network overload and congestion resulting from accumulating data, which are inherent problems in centralized architectures, thereby improving system efficiency.
Workshop AccessibleTV "Accessible User Interfaces for Future TV Applications"
NASA Astrophysics Data System (ADS)
Hahn, Volker; Hamisu, Pascal; Jung, Christopher; Heinrich, Gregor; Duarte, Carlos; Langdon, Pat
Approximately half of elderly people over 55 suffer from some type of typically mild visual, auditory, motor or cognitive impairment. For them, interaction, especially with PCs and other complex devices, is sometimes challenging, although accessible ICT applications could make a substantial difference to their quality of life. Such applications have the potential to enable or simplify participation and inclusion in their surrounding private and professional communities. However, the availability of accessible user interfaces capable of adapting to the specific needs and requirements of users with individual impairments is very limited. Although there are a number of APIs [1, 2, 3, 4] available for various platforms that allow developers to provide accessibility features within their applications, today none of them supports the automatic adaptation of multimodal interfaces to fit the individual requirements of users with different kinds of impairments. Moreover, the provision of accessible user interfaces is still expensive and risky for application developers, as they need special experience and effort for user tests. Today many implementations simply neglect the needs of elderly people, thus locking out a large portion of their potential users. The workshop is organized as part of the dissemination activity for the European-funded project GUIDE "Gentle user interfaces for elderly people", which aims to address this situation with a comprehensive approach to the realization of multimodal user interfaces capable of adapting to the needs of users with different kinds of mild impairments. As an application platform, GUIDE will mainly target TVs and set-top boxes, such as the emerging Connected-TV or WebTV platforms, as they have the potential to address the needs of elderly users with applications for home automation, communication or continuing education.
Doesburg, Frank; Cnossen, Fokie; Dieperink, Willem; Bult, Wouter; de Smet, Anne Marie; Touw, Daan J.; Nijsten, Maarten W.
2017-01-01
The objective of this study was to assess the usability benefits of adding a bedside central control interface that controls all intravenous (IV) infusion pumps compared to the conventional individual control of multiple infusion pumps. Eighteen dedicated ICU nurses volunteered in a between-subjects task-based usability test. A newly developed central control interface was compared to conventional control of multiple infusion pumps in a simulated ICU setting. Task execution time, clicks, errors and questionnaire responses were evaluated. Overall the central control interface outperformed the conventional control in terms of fewer user actions (40±3 vs. 73±20 clicks, p<0.001) and fewer user errors (1±1 vs. 3±2 errors, p<0.05), with no difference in task execution times (421±108 vs. 406±119 seconds, not significant). Questionnaires indicated a significant preference for the central control interface. Despite being novice users of the central control interface, ICU nurses displayed improved performance with the central control interface compared to the conventional interface they were familiar with. We conclude that the new user interface has an overall better usability than the conventional interface. PMID:28800617
1991-09-01
[Front-matter table-of-contents fragment: The Analytic Hierarchy Process — Introduction; The AHP Process; Implementation of CERTS using AHP; Consistency; User Interface.] … the proposed technique into a Decision Support System. Expert Choice implements the Analytic Hierarchy Process (AHP), an approach to multi-criteria …
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
… is a member of BATS Options registered with the Exchange for the purpose of making markets in options … The proposed bulk-quoting market making interface will be used by Users to submit and update their … promoting just and equitable principles of trade, by making available on an equal basis a new market making …
BFEE: A User-Friendly Graphical Interface Facilitating Absolute Binding Free-Energy Calculations.
Fu, Haohao; Gumbart, James C; Chen, Haochuan; Shao, Xueguang; Cai, Wensheng; Chipot, Christophe
2018-03-26
Quantifying protein-ligand binding has attracted the attention of both theorists and experimentalists for decades. Many methods for estimating binding free energies in silico have been reported in recent years. Proper use of the proposed strategies requires, however, adequate knowledge of the protein-ligand complex, the mathematical background for deriving the underlying theory, and time for setting up the simulations, bookkeeping, and postprocessing. Here, to minimize human intervention, we propose a toolkit aimed at facilitating the accurate estimation of standard binding free energies using a geometrical route, coined the binding free-energy estimator (BFEE), and introduced it as a plug-in of the popular visualization program VMD. Benefitting from recent developments in new collective variables, BFEE can be used to generate the simulation input files, based solely on the structure of the complex. Once the simulations are completed, BFEE can also be utilized to perform the post-treatment of the free-energy calculations, allowing the absolute binding free energy to be estimated directly from the one-dimensional potentials of mean force in simulation outputs. The minimal amount of human intervention required during the whole process combined with the ergonomic graphical interface makes BFEE a very effective and practical tool for the end-user.
Spatio-Temporal EEG Models for Brain Interfaces
Gonzalez-Navarro, P.; Moghadamfalahi, M.; Akcakaya, M.; Erdogmus, D.
2016-01-01
Multichannel electroencephalography (EEG) is widely used in non-invasive brain computer interfaces (BCIs) for user intent inference. EEG can be assumed to be a Gaussian process with unknown mean and autocovariance, and the estimation of parameters is required for BCI inference. However, the relatively high dimensionality of the EEG feature vectors with respect to the number of labeled observations leads to rank-deficient covariance matrix estimates. In this manuscript, to overcome ill-conditioned covariance estimation, we propose a structure for the covariance matrices of the multichannel EEG signals. Specifically, we assume that these covariances can be modeled as a Kronecker product of temporal and spatial covariances. Our results over the experimental data collected from the users of a letter-by-letter typing BCI show that, with fewer parameters to estimate, the system can achieve higher classification accuracies compared to a method that uses full unstructured covariance estimation. Moreover, in order to illustrate that the proposed Kronecker product structure could enable shortening the BCI calibration data collection sessions, using Cramer-Rao bound analysis on simulated data, we demonstrate that a model with structured covariance matrices will achieve the same estimation error as a model with no covariance structure using fewer labeled EEG observations. PMID:27713590
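The parameter saving behind the Kronecker assumption can be shown with a small, hedged numpy sketch; the temporal and spatial dimensions and the random covariances below are illustrative assumptions, not the paper's actual feature sizes.

import numpy as np

n_time, n_chan = 20, 16
rng = np.random.default_rng(0)

def random_spd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T / n + np.eye(n)         # symmetric positive-definite matrix

sigma_t = random_spd(n_time)               # temporal covariance (20 x 20)
sigma_s = random_spd(n_chan)               # spatial covariance  (16 x 16)

sigma_full = np.kron(sigma_t, sigma_s)     # structured spatio-temporal covariance
# Free parameters: 20*21/2 + 16*17/2 = 346 instead of 320*321/2 = 51,360
# for an unstructured 320 x 320 covariance.
print(sigma_full.shape)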
OntoFire: an ontology-based geo-portal for wildfires
NASA Astrophysics Data System (ADS)
Kalabokidis, K.; Athanasis, N.; Vaitis, M.
2011-12-01
With the proliferation of geospatial technologies on the Internet, the role of geo-portals (i.e. gateways to Spatial Data Infrastructures) in the area of wildfire management emerges. However, keyword-based techniques often frustrate users when looking for data of interest in geo-portal environments, while little attention has been paid to shifting from conventional keyword-based to navigation-based mechanisms. The presented OntoFire system is an ontology-based geo-portal about wildfires. Through the proposed navigation mechanisms, relationships between the data can be discovered which would otherwise not be possible when using conventional querying techniques alone. End users can use the browsing interface to find resources of interest by means of the navigation mechanisms provided. Data providers can use the publishing interface to submit new metadata, modify metadata or remove metadata from the catalogue. The proposed approach can improve the discovery of valuable information that is necessary to set priorities for disaster mitigation and prevention strategies. OntoFire aspires to be a focal point of integration and management of a very large amount of information, contributing in this way to the dissemination of knowledge and to the preparedness of operational stakeholders.
Experiments to evolve toward a tangible user interface for computer-aided design parts assembly
NASA Astrophysics Data System (ADS)
Legardeur, Jeremy; Garreau, Ludovic; Couture, Nadine
2004-05-01
In this paper, we present the concepts of the ESKUA (Experimentation of a Kinesics System Usable for Assembly) platform, which allows designers to carry out the assembly of mechanical CAD (Computer Aided Design) parts. This platform, based on a tangible user interface, allows assembly constraints to be taken into account from the beginning of the design phase, and especially during the manipulation of CAD models. Our goal is to propose a working environment in which the designer is confronted with real assembly constraints that are currently masked by existing CAD software functionalities. Thus, the platform is based on the handling of physical objects, called tangible interactors, which give the user a physical perception of the assembly constraints. To this end, we have defined a typology of interactors based on concepts proposed in Design For Assembly methods. We present here the results of studies that led to the evolution of this first interactor set. One concerns an experiment to evaluate the cognitive aspects of the use of interactors; the other is an analysis of existing mechanical products and fasteners. We show how these studies led to the evolution of the interactors based on the use of functional surfaces.
Lim, Soo-Chul; Shin, Jungsoon; Kim, Seung-Chan; Park, Joonah
2015-01-01
Touchscreen interaction has become a fundamental means of controlling mobile phones and smartwatches. However, the small form factor of a smartwatch limits the available interactive surface area. To overcome this limitation, we propose the expansion of the touch region of the screen to the back of the user’s hand. We developed a touch module for sensing the touched finger position on the back of the hand using infrared (IR) line image sensors, based on the calibrated IR intensity and the maximum intensity region of an IR array. For a complete touch-sensing solution, a gyroscope installed in the smartwatch is used to read wrist gestures. The gyroscope incorporates a dynamic time warping gesture recognition algorithm for eliminating unintended touch inputs during the free motion of the wrist while wearing the smartwatch. The prototype of the developed sensing module was implemented in a commercial smartwatch, and it was confirmed that the sensed positional information of the finger touching the back of the hand could be used to control the smartwatch graphical user interface. Our system not only affords a novel experience for smartwatch users, but also provides a basis for developing other useful interfaces. PMID:26184202
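The abstract mentions a dynamic time warping (DTW) step for rejecting unintended inputs during free wrist motion. The sketch below shows a generic DTW distance between a live gyroscope sequence and stored gesture templates, with a rejection threshold; the threshold value and per-sample distance measure are illustrative assumptions, not the values used in the prototype.

```python
import numpy as np

# Generic DTW distance between a live gyroscope sequence and a stored gesture
# template (each an array of per-sample 3-axis angular rates).
def dtw_distance(a, b):
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # per-sample distance
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def is_intended_gesture(live, templates, threshold=15.0):
    """Accept the touch input only if the wrist motion matches a known gesture."""
    return min(dtw_distance(live, t) for t in templates) < threshold

rng = np.random.default_rng(1)
template = rng.standard_normal((50, 3))
live = template + 0.05 * rng.standard_normal((50, 3))
print(is_intended_gesture(live, [template]))
```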
Near infrared spectroscopy based brain-computer interface
NASA Astrophysics Data System (ADS)
Ranganatha, Sitaram; Hoshi, Yoko; Guan, Cuntai
2005-04-01
A brain-computer interface (BCI) provides users with an output channel other than the normal output path of the brain. BCI has recently been given much attention as an alternative mode of communication and control for the disabled, such as patients suffering from Amyotrophic Lateral Sclerosis (ALS) or locked-in syndrome. BCI may also find applications in military, educational, and entertainment settings. Most of the existing BCI systems that rely on the brain's electrical activity use scalp EEG signals. The scalp EEG is an inherently noisy and non-linear signal, detrimentally affected by various artifacts such as EOG, EMG, and ECG. EEG is also cumbersome to use in practice, because of the need to apply conductive gel and the need for the subject to remain immobile. There is an urgent need for a more accessible interface that uses a more direct measure of cognitive function to control an output device. The optical response measured by Near Infrared Spectroscopy (NIRS), which reflects brain activation, can be used as an alternative to electrical signals, with the intention of developing a more practical and user-friendly BCI. In this paper, a new method of brain-computer interface (BCI) based on NIRS is proposed. Preliminary results of our experiments towards developing this system are reported.
STScI Archive Manual, Version 7.0
NASA Astrophysics Data System (ADS)
Padovani, Paolo
1999-06-01
The STScI Archive Manual provides the information a user needs to access the HST archive via its two user interfaces: StarView and a World Wide Web (WWW) interface. It provides descriptions of the StarView screens used to access information in the database and the format of that information, and introduces the user to the WWW interface. Using the two interfaces, users can search for observations, preview public data, and retrieve data from the archive. Using StarView one can also find calibration reference files and perform detailed association searches. With the WWW interface archive users can access, and obtain information on, all Multimission Archive at Space Telescope (MAST) data, a collection of mainly optical and ultraviolet datasets which include, amongst others, the International Ultraviolet Explorer (IUE) Final Archive. Both interfaces feature a name resolver which simplifies searches based on target name.
Concepts and implementations of natural language query systems
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Liu, I-Hsiung
1984-01-01
The user language interfaces currently developed for information systems are generally intended for serious users. These interfaces commonly ignore what is potentially the largest user group, i.e., casual users. This project discusses the concepts and implementation of a natural language query system that addresses the nature and information needs of casual users by allowing them to communicate with the system in their native (natural) language. In addition, a framework for the development of such an interface is introduced for the MADAM (Multics Approach to Data Access and Management) system at the University of Southwestern Louisiana.
fgui: A Method for Automatically Creating Graphical User Interfaces for Command-Line R Packages
Hoffmann, Thomas J.; Laird, Nan M.
2009-01-01
The fgui R package is designed for developers of R packages, to help rapidly, and sometimes fully automatically, create a graphical user interface for a command line R package. The interface is built upon the Tcl/Tk graphical interface included in R. The package further facilitates the developer by loading in the help files from the command line functions to provide context sensitive help to the user with no additional effort from the developer. Passing a function as the argument to the routines in the fgui package creates a graphical interface for the function, and further options are available to tweak this interface for those who want more flexibility. PMID:21625291
Logical optimization for database uniformization
NASA Technical Reports Server (NTRS)
Grant, J.
1984-01-01
Database uniformization refers to the building of a common user interface facility to support uniform access to any or all of a collection of distributed heterogeneous databases. Such a system should enable a user, situated anywhere within a set of distributed databases, to access all of the information in the databases without having to learn the various data manipulation languages. Furthermore, such a system should leave intact the component databases and, in particular, their already existing software. A survey of various aspects of the database uniformization problem and a proposed solution are presented.
The Johnson Space Center management information systems: User's guide to JSCMIS
NASA Technical Reports Server (NTRS)
Bishop, Peter C.; Erickson, Lloyd
1990-01-01
The Johnson Space Center Management Information System (JSCMIS) is an interface to computer data bases at the NASA Johnson Space Center which allows an authorized user to browse and retrieve information from a variety of sources with minimum effort. The User's Guide to JSCMIS is the supplement to the JSCMIS Research Report which details the objectives, the architecture, and implementation of the interface. It is a tutorial on how to use the interface and a reference for details about it. The guide is structured like an extended JSCMIS session, describing all of the interface features and how to use them. It also contains an appendix with each of the standard FORMATs currently included in the interface. Users may review them to decide which FORMAT most suits their needs.
Ahn, Minkyu; Lee, Mijin; Choi, Jinyoung; Jun, Sung Chan
2014-01-01
In recent years, research on Brain-Computer Interface (BCI) technology for healthy users has attracted considerable interest, and BCI games are especially popular. This study reviews the current status of, and describes future directions, in the field of BCI games. To this end, we conducted a literature search and found that BCI control paradigms using electroencephalographic signals (motor imagery, P300, steady state visual evoked potential and passive approach reading mental state) have been the primary focus of research. We also conducted a survey of nearly three hundred participants that included researchers, game developers and users around the world. From this survey, we found that all three groups (researchers, developers and users) agreed on the significant influence and applicability of BCI and BCI games, and they all selected prostheses, rehabilitation and games as the most promising BCI applications. User and developer groups tended to give low priority to passive BCI and the whole head sensor array. Developers gave higher priorities to “the easiness of playing” and the “development platform” as important elements for BCI games and the market. Based on our assessment, we discuss the critical point at which BCI games will be able to progress from their current stage to widespread marketing to consumers. In conclusion, we propose three critical elements important for expansion of the BCI game market: standards, gameplay and appropriate integration. PMID:25116904
Ahn, Minkyu; Lee, Mijin; Choi, Jinyoung; Jun, Sung Chan
2014-08-11
In recent years, research on Brain-Computer Interface (BCI) technology for healthy users has attracted considerable interest, and BCI games are especially popular. This study reviews the current status of, and describes future directions, in the field of BCI games. To this end, we conducted a literature search and found that BCI control paradigms using electroencephalographic signals (motor imagery, P300, steady state visual evoked potential and passive approach reading mental state) have been the primary focus of research. We also conducted a survey of nearly three hundred participants that included researchers, game developers and users around the world. From this survey, we found that all three groups (researchers, developers and users) agreed on the significant influence and applicability of BCI and BCI games, and they all selected prostheses, rehabilitation and games as the most promising BCI applications. User and developer groups tended to give low priority to passive BCI and the whole head sensor array. Developers gave higher priorities to "the easiness of playing" and the "development platform" as important elements for BCI games and the market. Based on our assessment, we discuss the critical point at which BCI games will be able to progress from their current stage to widespread marketing to consumers. In conclusion, we propose three critical elements important for expansion of the BCI game market: standards, gameplay and appropriate integration.
Evangelista, Daniela; Zuccaro, Antonio; Lančinskas, Algirdas; Žilinskas, Julius; Guarracino, Mario R
2016-02-17
The cost per patient of next generation sequencing for the detection of rare mutations may be significantly reduced using pooled experiments. Recently, some techniques have been proposed for the planning of pooled experiments and for the optimal allocation of patients into pools. However, the lack of a user-friendly resource for planning the design of pooled experiments forces scientists to carry out frequent, complex and long computations. OPENDoRM is a powerful collection of novel mathematical algorithms usable via an intuitive graphical user interface. It enables researchers to speed up the planning of their routine experiments and supports scientists without specific bioinformatics expertise. Users can automatically carry out analyses of the costs associated with the optimal allocation of patients into pools. They are also able to choose between three distinct mathematical pooling methods, each of which also suggests the optimal configuration for the submitted experiment. Importantly, in order to keep track of the performed experiments, users can save and export the results of their experiments in standard tabular and chart formats. OPENDoRM is a freely available web-oriented application for the planning of pooled NGS experiments, available at: http://www-labgtp.na.icar.cnr.it/OPENDoRM. Its easy and intuitive graphical user interface enables researchers to plan their experiments using novel algorithms and to interactively visualize the results.
Clerico, Andrea; Tiwari, Abhishek; Gupta, Rishabh; Jayaraman, Srinivasan; Falk, Tiago H.
2018-01-01
The quantity of music content is rapidly increasing and automated affective tagging of music video clips can enable the development of intelligent retrieval, music recommendation, automatic playlist generators, and music browsing interfaces tuned to the users' current desires, preferences, or affective states. To achieve this goal, the field of affective computing has emerged, in particular the development of so-called affective brain-computer interfaces, which measure the user's affective state directly from measured brain waves using non-invasive tools, such as electroencephalography (EEG). Typically, conventional features extracted from the EEG signal have been used, such as frequency subband powers and/or inter-hemispheric power asymmetry indices. More recently, the coupling between EEG and peripheral physiological signals, such as the galvanic skin response (GSR), has also been proposed. Here, we show the importance of EEG amplitude modulations and propose several new features that measure the amplitude-amplitude cross-frequency coupling per EEG electrode, as well as linear and non-linear connections between multiple electrode pairs. When tested on a publicly available dataset of music video clips tagged with subjective affective ratings, support vector classifiers trained on the proposed features were shown to outperform those trained on conventional benchmark EEG features by as much as 6, 20, 8, and 7% for arousal, valence, dominance and liking, respectively. Moreover, fusion of the proposed features with EEG-GSR coupling features was shown to be particularly useful for arousal (feature-level fusion) and liking (decision-level fusion) prediction. Together, these findings show the importance of the proposed features to characterize human affective states during music clip watching. PMID:29367844
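To make the amplitude-amplitude coupling idea concrete, here is a sketch of one such feature for a single EEG channel: correlate the Hilbert envelopes of two sub-bands. The band edges and the use of Pearson correlation are assumptions for illustration, not the authors' exact feature definition.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Illustrative amplitude-amplitude coupling feature for one EEG channel:
# correlation between the Hilbert envelopes of two sub-bands (here a toy
# "theta" and "gamma" pair).
def band_envelope(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def aac_feature(x, fs, band1=(4, 8), band2=(30, 45)):
    e1 = band_envelope(x, fs, *band1)
    e2 = band_envelope(x, fs, *band2)
    return np.corrcoef(e1, e2)[0, 1]

fs = 128.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.randn(t.size)   # synthetic channel
print(aac_feature(x, fs))
```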
Human-telerobot interactions - Information, control, and mental models
NASA Technical Reports Server (NTRS)
Smith, Randy L.; Gillan, Douglas J.
1987-01-01
A part of NASA's Space Station will be a teleoperated robot (telerobot) with arms for grasping and manipulation, feet for holding onto objects, and television cameras for visual feedback. The objective of the work described in this paper is to develop the requirements and specifications for the user-telerobot interface and to determine through research and testing that the interface results in efficient system operation. The focus of the development of the user-telerobot interface is on the information required by the user, the user inputs, and the design of the control workstation. Closely related to both the information required by the user and the user's control of the telerobot is the user's mental model of the relationship between the control inputs and the telerobot's actions.
Land Use and Land Cover Maps of Europe: a WebGIS Platform
NASA Astrophysics Data System (ADS)
Brovelli, M. A.; Fahl, F. C.; Minghini, M.; Molinari, M. E.
2016-06-01
This paper presents the methods and implementation processes of a WebGIS platform designed to publish the available land use and land cover maps of Europe at the continental scale. The system is built completely on open source infrastructure and open standards. The proposed architecture is based on a server-client model with GeoServer as the map server, Leaflet as the client-side mapping library and the Bootstrap framework at the core of the front-end user interface. The web user interface is designed to have typical features of a desktop GIS (e.g. activate/deactivate layers and order layers by drag and drop actions) and to show specific information on the activated layers (e.g. legend and simplified metadata). Users have the possibility to change the base map from a given list of map providers (e.g. OpenStreetMap and Microsoft Bing) and to control the opacity of each layer to facilitate the comparison with both other land cover layers and the underlying base map. In addition, users can add to the platform any custom layer available through a Web Map Service (WMS) and activate the visualization of photos from popular photo sharing services. This last functionality is provided in order to allow a visual assessment of the available land coverages based on other user-generated content available on the Internet. It is intended as a first step towards a calibration/validation service that will be made available in the future.
Three-dimensional user interfaces for scientific visualization
NASA Technical Reports Server (NTRS)
Vandam, Andries
1995-01-01
The main goal of this project is to develop novel and productive user interface techniques for creating and managing visualizations of computational fluid dynamics (CFD) datasets. We have implemented an application framework in which we can explore user interfaces for visualizing computational fluid dynamics datasets. This UI technology allows users to interactively place visualization probes in a dataset and modify some of their parameters. We have also implemented a time-critical scheduling system which strives to maintain a constant frame rate regardless of the number of visualization techniques. In the past year, we have published parts of this research at two conferences: the research annotation system at Visualization 1994, and the 3D user interface at UIST 1994. The real-time scheduling system has been submitted to the SIGGRAPH 1995 conference. Copies of these documents are included with this report.
Graphical user interfaces for symbol-oriented database visualization and interaction
NASA Astrophysics Data System (ADS)
Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger
1997-04-01
In this approach, two basic services designed for the engineering of computer-based systems are combined: a symbol-oriented man-machine service and a high-speed database service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI builder and a GUI manager for the database service, based upon the man-machine service and the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI builder and GUI manager, users can build and operate their own graphical user interface for a given database according to their needs without writing a single line of code.
CDROM User Interface Evaluation: The Appropriateness of GUIs.
ERIC Educational Resources Information Center
Bosch, Victoria Manglano; Hancock-Beaulieu, Micheline
1995-01-01
Assesses the appropriateness of GUIs (graphical user interfaces), more specifically Windows-based interfaces for CD-ROM. An evaluation model is described that was developed to carry out an expert evaluation of the interfaces of seven CD-ROM products. Results are discussed in light of HCI (human-computer interaction) usability criteria and design…
Cross-Cultural Interface Design and the Classroom-Learning Environment in Taiwan
ERIC Educational Resources Information Center
Chang, Chia-Lin; Su, Yelin
2012-01-01
This study examined whether using localized interface designs would make a difference in users' learning results and their perceptions of the interface design in a classroom learning environment. This study also sought to learn more about users' attitudes toward the localized interface features. To assess the impact of using localized interfaces…
Railroad track inspection interface demonstration : final report.
DOT National Transportation Integrated Search
2016-01-01
This project developed a track data user interface utilizing the Google Glass optical display device. The interface allows the user : to recall data stored remotely and view the data on the Google Glass. The technical effort required developing a com...
Preventing Shoulder-Surfing Attack with the Concept of Concealing the Password Objects' Information
Ho, Peng Foong; Kam, Yvonne Hwei-Syn; Wee, Mee Chin
2014-01-01
Traditionally, picture-based password systems employ password objects (pictures/icons/symbols) as input during an authentication session, thus making them vulnerable to “shoulder-surfing” attack because the visual interface by function is easily observed by others. Recent software-based approaches attempt to minimize this threat by requiring users to enter their passwords indirectly by performing certain mental tasks to derive the indirect password, thus concealing the user's actual password. However, weaknesses in the positioning of distracter and password objects introduce usability and security issues. In this paper, a new method, which conceals information about the password objects as much as possible, is proposed. Besides concealing the password objects and the number of password objects, the proposed method allows both password and distracter objects to be used as the challenge set's input. The correctly entered password appears to be random and can only be derived with the knowledge of the full set of password objects. Therefore, it would be difficult for a shoulder-surfing adversary to identify the user's actual password. Simulation results indicate that the correct input object and its location are random for each challenge set, thus preventing frequency of occurrence analysis attack. User study results show that the proposed method is able to prevent shoulder-surfing attack. PMID:24991649
Introduction of knowledge bases in patient's data management system: role of the user interface.
Chambrin, M C; Ravaux, P; Jaborska, A; Beugnet, C; Lestavel, P; Chopin, C; Boniface, M
1995-02-01
As the number of signals and data to be handled in the intensive care unit grows, it is necessary to design more powerful computing systems that integrate and summarize all of this information. The manual input of data, such as clinical signs and drug prescriptions, and the synthetic representation of these data require an ever more sophisticated user interface. The introduction of knowledge bases into data management makes it possible to design contextual interfaces. The objective of this paper is to show the importance of the design of the user interface in the daily use of a clinical information system. We then describe a methodology that uses the man-machine interaction to capture the clinician's knowledge during clinical practice. The different steps are the audit of the user's actions, the elaboration of statistical models allowing the definition of new knowledge, and the validation performed before complete integration. Part of this knowledge can be used to improve the user interface. Finally, we describe the implementation of these concepts on a UNIX platform using the OSF/MOTIF graphical interface.
A review of existing and potential computer user interfaces for modern radiology.
Iannessi, Antoine; Marcy, Pierre-Yves; Clatz, Olivier; Bertrand, Anne-Sophie; Sugimoto, Maki
2018-05-16
The digitalization of modern imaging has led radiologists to become very familiar with computers and their user interfaces (UI). New options for display and command offer expanded possibilities, but the mouse and keyboard remain the most commonly utilized, for usability reasons. In this work, we review and discuss different UI and their possible application in radiology. We consider two-dimensional and three-dimensional imaging displays in the context of interventional radiology, and discuss interest in touchscreens, kinetic sensors, eye detection, and augmented or virtual reality. We show that UI design specifically for radiologists is key for future use and adoption of such new interfaces. Next-generation UI must fulfil professional needs, while considering contextual constraints. • The mouse and keyboard remain the most utilized user interfaces for radiologists. • Touchscreen, holographic, kinetic sensors and eye tracking offer new possibilities for interaction. • 3D and 2D imaging require specific user interfaces. • Holographic display and augmented reality provide a third dimension to volume imaging. • Good usability is essential for adoption of new user interfaces by radiologists.
Inclusive Smartphone Interface Design in Context: Co(Re)designing the PIS.
Magee, Paul; Ward, Gillian; Moody, Louise; Roebuck, Annette
2017-01-01
User context optimises smartphone interface design. Neglect of user context during development delays or prevents benefit to marginalised consumers. Working with People with Learning Disability (PWLD) to develop interfaces refined by communication need will improve the User Experience (UX). In research, a Participant Information Sheet (PIS) discloses planned study activity. This paper explains the co-creation of a PIS based on the communication needs of PWLD.
ERIC Educational Resources Information Center
Marchionini, Gary
2002-01-01
Describes how user interfaces for the Bureau of Labor Statistics (BLS) web site evolved over a 5-year period along with the larger organizational interface and how this co-evolution has influenced the institution. Interviews with BLS staff and transaction log analysis are the foci of this study, as well as user information-seeking studies and user…
Graphical Requirements for Force Level Planning. Volume 2
1991-09-01
The technology review includes graphics algorithms, computer hardware, computer software, and design methodologies. The technology can either exist today or ... level graphics language. User Interface Design Tools: as user interfaces have become more sophisticated, they have become harder to develop.
Image Understanding and Intelligent Parallel Systems
1991-05-09
a common user interface for the interactive, graphical manipulation of those histories, and ... of up to a factor of 100 over single-workstation implementations. User interfaces to large multiprocessor computers are a difficult issue addressed ...
NASA Astrophysics Data System (ADS)
Holzinger, Andreas; Stickel, Christian; Fassold, Markus; Ebner, Martin
Interface consistency is an important basic concept in web design and has an effect on the performance and satisfaction of end users. Consistency also has significant effects on the learning performance of both expert and novice end users. Consequently, the evaluation of consistency within an e-learning system, and the ensuing eradication of irritating discrepancies in the user interface redesign, is a big issue. In this paper, we report on our experiences with the Shadow Expert Technique (SET) during the evaluation of the consistency of the user interface of a large university learning management system. The main objective of this new usability evaluation method is to understand the interaction processes of end users with a specific system interface. Two teams of usability experts worked independently from each other in order to maximize the objectivity of the results. The outcome of the SET method is a list of recommended changes to improve the user interaction processes and hence facilitate high consistency.
An augmented reality haptic training simulator for spinal needle procedures.
Sutherland, Colin; Hashtrudi-Zaad, Keyvan; Sellens, Rick; Abolmaesumi, Purang; Mousavi, Parvin
2013-11-01
This paper presents the prototype for an augmented reality haptic simulation system with potential for spinal needle insertion training. The proposed system is composed of a torso mannequin, a MicronTracker2 optical tracking system, a PHANToM haptic device, and a graphical user interface to provide visual feedback. The system allows users to perform simulated needle insertions on a physical mannequin overlaid with an augmented reality cutaway of patient anatomy. A tissue model based on a finite-element model provides force during the insertion. The system allows for training without the need for the presence of a trained clinician or access to live patients or cadavers. A pilot user study demonstrates the potential and functionality of the system.
Neuromuscular interfacing: establishing an EMG-driven model for the human elbow joint.
Pau, James W L; Xie, Shane S Q; Pullan, Andrew J
2012-09-01
Assistive devices aim to mitigate the effects of physical disability by aiding users to move their limbs or by providing rehabilitation through therapy. These devices are commonly embodied by robotic or exoskeletal systems that are still in development and use the electromyographic (EMG) signal to determine user intent. Not much focus has been placed on developing a neuromuscular interface (NI) that relies solely on the EMG signal and does not require modifications to the end user's state to enhance the signal (such as adding weights). This paper presents the development of a flexible, physiological model for the elbow joint that is leading toward the implementation of an NI, which predicts joint motion from EMG signals for both able-bodied and less-abled users. The approach uses musculotendon models to determine muscle contraction forces, a proposed musculoskeletal model to determine total joint torque, and a kinematic model to determine joint rotational kinematics. After a sensitivity analysis and tuning using genetic algorithms, subject trials yielded an average root-mean-square error of 6.53° and 22.4° for a single cycle and random cycles of movement of the elbow joint, respectively. These results help validate the elbow model and pave the way toward the development of an NI.
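To make the EMG-to-motion pipeline concrete, here is a deliberately simplified sketch: normalized flexor and extensor activations produce a net torque through a fixed gain and moment arm, which is then integrated into elbow motion. The gain, moment arm, inertia, and damping are invented placeholders rather than the calibrated Hill-type musculotendon parameters used in the paper.

```python
import numpy as np

# Highly simplified EMG-driven elbow sketch: activations -> net torque ->
# rotational dynamics. All parameter values below are illustrative only.
def simulate_elbow(emg_flexor, emg_extensor, dt=0.01,
                   f_max=300.0, r_moment=0.03, inertia=0.06, damping=0.5):
    theta, omega = 0.0, 0.0
    trajectory = []
    for a_f, a_e in zip(emg_flexor, emg_extensor):
        torque = r_moment * f_max * (a_f - a_e)       # net muscle torque (N m)
        alpha = (torque - damping * omega) / inertia  # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        trajectory.append(theta)
    return np.array(trajectory)

t = np.arange(0, 2, 0.01)
flexor = 0.2 * (1 + np.sin(2 * np.pi * 0.5 * t))      # toy normalized activations
extensor = np.full_like(t, 0.05)
print(np.degrees(simulate_elbow(flexor, extensor))[-1])
```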
Framework for experimenting with QoS for multimedia services
NASA Astrophysics Data System (ADS)
Chen, Deming; Colwell, Regis; Gelman, Herschel; Chrysanthis, Panos K.; Mosse, Daniel
1996-03-01
It has been recognized that effective support for multimedia applications must provide Quality of Service (QoS) guarantees. Current methods propose to provide such QoS guarantees through coordinated network resource reservations. In our approach, we extend this idea by providing system-wide QoS guarantees that consider the data manipulation and transformations needed at the intermediate and end sites of the network. Given a user's QoS requirements, multisegment virtual channels are established with the necessary communication and computation resources reserved for the timely, synchronized, and reliable delivery of the different datatypes. Such data originate in several distributed data repositories, are transformed at intermediate service stations into suitable formats for transportation and presentation, and are delivered to a viewing unit. In this paper, we first review NETWORLD, an architecture that provides such QoS guarantees and an interface for the specification and negotiation of user-level QoS requirements. Our user interface supports both expert and non-expert modes. We then describe how to map user-level QoS requirements into low-level system parameters, leading to a contract between the application and the network. The mapping considers various characteristics of the architecture (such as the hardware and software available at each source, destination, or intermediate site) as well as cost constraints.
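As a toy illustration of the user-level to system-level mapping step, the sketch below translates qualitative QoS choices into reservation parameters with a lookup table. The categories, parameter names, and numbers are invented for illustration; NETWORLD's actual mapping also accounted for per-site hardware, software, and cost constraints.

```python
# Toy mapping from user-level QoS choices to low-level reservation parameters.
# Profiles and values are hypothetical, not NETWORLD's actual tables.
QOS_PROFILES = {
    "video": {
        "high":   {"bandwidth_kbps": 6000, "frame_rate": 30, "buffer_ms": 100},
        "medium": {"bandwidth_kbps": 2500, "frame_rate": 24, "buffer_ms": 200},
        "low":    {"bandwidth_kbps": 800,  "frame_rate": 15, "buffer_ms": 400},
    },
    "audio": {
        "high": {"bandwidth_kbps": 256, "sample_rate": 48000},
        "low":  {"bandwidth_kbps": 64,  "sample_rate": 16000},
    },
}

def map_user_qos(request):
    """Translate e.g. {'video': 'high', 'audio': 'low'} into system parameters."""
    return {media: QOS_PROFILES[media][level] for media, level in request.items()}

print(map_user_qos({"video": "medium", "audio": "high"}))
```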
Adaptive interface for personalizing information seeking.
Narayanan, S; Koppaka, Lavanya; Edala, Narasimha; Loritz, Don; Daley, Raymond
2004-12-01
An adaptive interface autonomously adjusts its display and available actions to current goals and abilities of the user by assessing user status, system task, and the context. Knowledge content adaptability is needed for knowledge acquisition and refinement tasks. In the case of knowledge content adaptability, the requirements of interface design focus on the elicitation of information from the user and the refinement of information based on patterns of interaction. In such cases, the emphasis on adaptability is on facilitating information search and knowledge discovery. In this article, we present research on adaptive interfaces that facilitates personalized information seeking from a large data warehouse. The resulting proof-of-concept system, called source recommendation system (SRS), assists users in locating and navigating data sources in the repository. Based on the initial user query and an analysis of the content of the search results, the SRS system generates a profile of the user tailored to the individual's context during information seeking. The user profiles are refined successively and are used in progressively guiding the user to the appropriate set of sources within the knowledge base. The SRS system is implemented as an Internet browser plug-in to provide a seamless and unobtrusive, personalized experience to the users during the information search process. The rationale behind our approach, system design, empirical evaluation, and implications for research on adaptive interfaces are described in this paper.
Observing proposals on the Web at the National Optical Astronomy Observatories
NASA Astrophysics Data System (ADS)
Pilachowski, Catherine A.; Barnes, Jeannette; Bell, David J.
1998-07-01
Proposals for telescope time at facilities available through the National Optical Astronomy Observatories can now be prepared and submitted via the WWW. Investigators submit proposal information through a series of HTML forms to the NOAO server, where the information is processed by Perl CGI scripts. PostScript figures and ASCII files may be attached by investigators for inclusion in their proposals using their browser's upload feature. Proposal information is saved on the server so that investigators can return in later sessions to continue work on a proposal and so that collaborators can participate in writing the proposal if they have access to the proposal account name and password. The system provides on-line verification of LaTeX syntax and a spellchecker, and confirms that all sections of the proposal are filled out. Users can request a LaTeX or PostScript copy of their proposal by e-mail, or view the proposal on line. The advantages of the Web-based process for our users are convenience, access to on-line documentation, and the simple interface which avoids direct confrontation with LaTeX. From the NOAO point of view, the advantage is the use of standardized formats and syntax, particularly as we begin to receive proposals for the Gemini telescopes and some independent observatories.
A mobile phone user interface for image-based dietary assessment
NASA Astrophysics Data System (ADS)
Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.
2014-02-01
Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.
NASA Technical Reports Server (NTRS)
Szczur, Martha R.
1989-01-01
The Transportable Applications Environment Plus (TAE Plus), developed by NASA's Goddard Space Flight Center, is a portable User Interface Management System (UIMS) which provides an intuitive WYSIWYG WorkBench for prototyping and designing an application's user interface, integrated with tools for efficiently implementing the designed user interface and effectively managing the user interface during an application's active domain. During the development of TAE Plus, many design and implementation decisions were based on the state of the art in graphics workstations, windowing systems, and object-oriented programming languages. Some of the problems and issues experienced during implementation are discussed. A description of the next development steps planned for TAE Plus is also given.
A Mobile Phone User Interface for Image-Based Dietary Assessment
Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.
2016-01-01
Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use. PMID:28572696
A Mobile Phone User Interface for Image-Based Dietary Assessment.
Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A; Boushey, Carol J; Delp, Edward J
2014-02-02
Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.
The Rise of the Graphical User Interface.
ERIC Educational Resources Information Center
Edwards, Alastair D. N.
1996-01-01
Discusses the history of the graphical user interface (GUI) and the growing realization that adaptations must be made to it lest its visual nature discriminate against nonsighted or sight-impaired users. One of the most popular commercially developed adaptations is to develop sounds that signal the location of icons or menus to mouse users.…
Graphical User Interfaces and Library Systems: End-User Reactions.
ERIC Educational Resources Information Center
Zorn, Margaret; Marshall, Lucy
1995-01-01
Describes a study by the Parke-Davis Pharmaceutical Research Library to determine user satisfaction with the graphical user interface-based (GUI) Dynix Marquis compared with the text-based Dynix Classic Online Public Access Catalog (OPAC). Results show that the GUI-based OPAC was preferred by end users over the text-based OPAC. (eight references) (DGM)
Effective Levels of Adaptation to Different Types of Users in Interactive Museum Systems.
ERIC Educational Resources Information Center
Paterno, F.; Mancini, C.
2000-01-01
Discusses user interaction with museum application interfaces and emphasizes the importance of adaptable and adaptive interfaces to meet differing user needs. Considers levels of support that can be given to different users during navigation of museum hypermedia information, using examples from the Web site for the Marble Museum (Italy).…
Personalization of XML Content Browsing Based on User Preferences
ERIC Educational Resources Information Center
Encelle, Benoit; Baptiste-Jessel, Nadine; Sedes, Florence
2009-01-01
Personalization of user interfaces for browsing content is a key concept to ensure content accessibility. In this direction, we introduce concepts that result in the generation of personalized multimodal user interfaces for browsing XML content. User requirements concerning the browsing of a specific content type can be specified by means of…
Space Segment (SS) and the Navigation User Segment (US) Interface Control Document (ICD)
DOT National Transportation Integrated Search
1993-10-10
This Interface Control Document (ICD) defines the requirements related to the interface between the Space Segment (SS) of the Global Positioning System (GPS) and the Navigation Users Segment of the GPS. 2880k, 154p.
Flexible software architecture for user-interface and machine control in laboratory automation.
Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E
1998-10-01
We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
Experimental setup for evaluating an adaptive user interface for teleoperation control
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.
2017-05-01
A vital part of human interaction with a machine is the control interface, which by itself can define user satisfaction and the efficiency of performing a task. This paper elaborates on the implementation of an experimental setup to study an adaptive algorithm that can help the user better tele-operate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and the associated results used to validate the adaptive interface on a differential-drive robot with two different input devices: a joystick and a Myo gesture-control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
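The following sketch illustrates, under stated assumptions, how a genetic algorithm could tune the parameters of a simple linear map from input-device axes to differential-drive velocity commands. The fitness function (distance between the mapped command and the intended command over simulated trials) and the GA settings are stand-ins for the experimental setup, not the authors' exact algorithm.

```python
import numpy as np

# Sketch: GA tuning of a 2x2 linear map from joystick axes to (linear, angular)
# velocity commands for a differential-drive robot. All settings are illustrative.
rng = np.random.default_rng(1)

def fitness(params, trials=20):
    m = params.reshape(2, 2)
    cost = 0.0
    for _ in range(trials):
        goal = rng.uniform(-1, 1, 2)                       # intended (v, w) command
        user_input = goal + 0.1 * rng.standard_normal(2)   # noisy user action
        cost += np.linalg.norm(m @ user_input - goal)
    return -cost                                           # higher is better

def evolve(pop_size=40, generations=60, sigma=0.2):
    pop = rng.standard_normal((pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # keep best half
        children = parents + sigma * rng.standard_normal(parents.shape)
        pop = np.vstack([parents, children])                     # elitism + mutation
    return pop[np.argmax([fitness(p) for p in pop])].reshape(2, 2)

print(evolve())   # learned input-to-velocity mapping (should approach identity)
```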
User productivity as a function of AutoCAD interface design.
Mitta, D A; Flores, P L
1995-12-01
Increased operator productivity is a desired outcome of user-CAD interaction scenarios. Two objectives of this research were to (1) define a measure of operator productivity and (2) empirically investigate the potential effects of CAD interface design on operator productivity, where productivity is defined as the percentage of a drawing session correctly completed per unit time. Here, AutoCAD provides the CAD environment of interest. Productivity with respect to two AutoCAD interface designs (menu, template) and three task types (draw, dimension, display) was investigated. Analysis of user productivity data revealed significantly higher productivity under the menu interface condition than under the template interface condition. A significant effect of task type was also found, with user productivity for display tasks higher than for draw and dimension tasks. Implications of these results are presented.
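The stated productivity measure can be written down directly; the helper below simply encodes "percentage of a drawing session correctly completed per unit time", with minutes as an assumed time unit.

```python
# Direct encoding of the stated productivity measure.
def productivity(percent_correctly_completed, session_minutes):
    return percent_correctly_completed / session_minutes

# e.g. 80% of the session completed correctly in 25 minutes
print(productivity(80.0, 25.0))   # 3.2 percent per minute
```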
Flight Telerobotic Servicer prototype simulator
NASA Astrophysics Data System (ADS)
Schein, Rob; Krauze, Linda; Hartley, Craig; Dickenson, Alan; Lavecchia, Tom; Working, Bob
A prototype simulator for the Flight Telerobotic Servicer (FTS) system is described for use in the design development of the FTS, with emphasis on the hand controller and user interface. The simulator runs on a graphics workstation and is based on rapid prototyping tools for systems analyses of the user interface and the hand controller. Kinematic modeling, manipulator-control algorithms, and communications programs are contained in the simulator software. The hardwired FTS panels and operator interface for use on the STS Orbiter are represented graphically, and the simulated controls function as they do in the final FTS system configuration. The robotic arm moves based on the user hand-controller interface, and the joint angles and other data are shown on the prototype of the user interface. This graphics simulation tool provides a means for familiarizing crewmembers with FTS system operation, displays, and controls.
Automating testbed documentation and database access using World Wide Web (WWW) tools
NASA Technical Reports Server (NTRS)
Ames, Charles; Auernheimer, Brent; Lee, Young H.
1994-01-01
A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.
Real-Time Distributed Algorithms for Visual and Battlefield Reasoning
2006-08-01
High-Level Task Definition Language, Graphical User Interface (GUI), Story Analysis, Story Interpretation, SensIT Nodes ... or more actions to be taken in the event the conditions are satisfied. We developed graphical user interfaces that may be used to express such tasks.
Exploring the simulation requirements for virtual regional anesthesia training
NASA Astrophysics Data System (ADS)
Charissis, V.; Zimmer, C. R.; Sakellariou, S.; Chan, W.
2010-01-01
This paper presents an investigation of the simulation requirements for virtual regional anaesthesia training. To this end, we have developed a prototype human-computer interface designed to facilitate Virtual Reality (VR)-augmented educational tactics for regional anaesthesia training. The proposed interface system aims to complement nerve-blocking techniques. The system is designed to operate in a real-time 3D environment, presenting anatomical information and enabling the user to explore the spatial relations of different anatomical parts without any physical constraints. Furthermore, the proposed system aims to assist trainee anaesthetists in building a mental, three-dimensional map of the anatomical elements and their relationship to the ultrasound imaging used to navigate the anaesthetic needle. Opting for a sophisticated approach to interaction, the interface elements are based on simplified visual representations of real objects and can be operated through haptic devices and surround auditory cues. This paper discusses the challenges involved in the HCI design, introduces the visual components of the interface, and presents a tentative plan of future work, which involves the development of realistic haptic feedback and various regional anaesthesia training scenarios.
User interface and patient involvement.
Andreassen, Hege Kristin; Lundvoll Nilsen, Line
2013-01-01
Increased patient involvement is a goal in contemporary health care, and is of importance to the development of patient-oriented ICT. In this paper we discuss how the design of patient-user interfaces can affect patient involvement. Our discussion is based on 12 semi-structured interviews with patient users of a web-based solution for patient-doctor communication piloted in Norway. We argue that ICT solutions offering a choice of user interfaces on the patient side are preferable to ensure individual accommodation and a high degree of patient involvement. When introducing web-based tools for patient-health professional communication, a free-text option should be provided to the patient users.
A Collaborative Brain-Computer Interface for Improving Human Performance
Wang, Yijun; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG) based brain-computer interfaces (BCI) have been studied since the 1970s. Currently, the main focus of BCI research lies on the clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, the BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practices due to technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCI applied to the EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) Event-related potentials (ERP) averaging, (2) Feature concatenating, and (3) Voting. In a demonstration system using the Voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the numbers of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision of reaching direction could be made around 100–250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse brain activities of a group of people to improve the overall performance of natural human behavior. PMID:21655253
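A minimal sketch of the decision-level Voting fusion described above: each subject's single-user classifier contributes a predicted reaching direction, and the group decision is the majority label. The single-user classifiers themselves are omitted; the labels here are placeholders.

```python
import numpy as np

# Decision-level fusion by majority vote across per-subject predictions
# (0 = reach left, 1 = reach right).
def majority_vote(per_subject_predictions):
    votes = np.bincount(per_subject_predictions, minlength=2)
    return int(np.argmax(votes))

print(majority_vote([0, 1, 1, 0, 1]))   # -> 1 (reach right), 3 of 5 votes
```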
Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando
2008-01-01
This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. For both results, the outcomes led to trajectories that were significantly smoother and apt at reaching fixed or moving targets with relative ease and within a 5% error margin or deviation from desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
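As an illustration of the per-user calibration idea, the sketch below fits a multilayer perceptron that maps raw, jittery gaze coordinates to intended cursor positions using simulated data. scikit-learn's MLPRegressor is used here as a convenient stand-in for the paper's custom ANN, and the jitter model and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Per-user calibration sketch: learn a mapping from jittery gaze coordinates
# to intended cursor positions, then use it to smooth new gaze samples.
rng = np.random.default_rng(0)
targets = rng.uniform(0, 1, size=(500, 2))              # intended cursor positions
gaze = targets + 0.03 * rng.standard_normal((500, 2))   # simulated saccadic jitter

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(gaze, targets)

raw = np.array([[0.52, 0.31]])       # new noisy gaze sample
print(model.predict(raw))            # smoothed cursor position
```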
Automatic system for ionization chamber current measurements.
Brancaccio, Franco; Dias, Mauro S; Koskinas, Marina F
2004-12-01
The present work describes an automatic system developed for current integration measurements at the Laboratório de Metrologia Nuclear of the Instituto de Pesquisas Energéticas e Nucleares. The system includes software (graphical user interface and control) and a module connected to a microcomputer by means of a commercial data acquisition card. Measurements were performed to check the performance of the system and to validate the proposed design.
Yuan, Michael Juntao; Finley, George Mike; Mills, Christy; Johnson, Ron Kim
2013-01-01
Background: Clinical decision support systems (CDSS) are important tools to improve health care outcomes and reduce preventable medical adverse events. However, the effectiveness and success of CDSS depend on their implementation context and usability in complex health care settings. As a result, usability design and validation, especially in real world clinical settings, are crucial aspects of successful CDSS implementations. Objective: Our objective was to develop a novel CDSS to help frontline nurses better manage critical symptom changes in hospitalized patients, hence reducing preventable failure to rescue cases. A robust user interface and an implementation strategy that fit into existing workflows were key for the success of the CDSS. Methods: Guided by a formal usability evaluation framework, UFuRT (user, function, representation, and task analysis), we developed a high-level specification of the product that captures key usability requirements and is flexible to implement. We interviewed users of the proposed CDSS to identify requirements and listed the functions and operations the system must perform. We then designed visual and workflow representations of the product to perform the operations. The user interface and workflow design were evaluated via heuristic and end user performance evaluations. The heuristic evaluation was done after the first prototype, and its results were incorporated into the product before the end user evaluation was conducted. First, we recruited 4 evaluators with strong domain expertise to study the initial prototype. Heuristic violations were coded and rated for severity. Second, after development of the system, we assembled a panel of nurses, consisting of 3 licensed vocational nurses and 7 registered nurses, to evaluate the user interface and workflow via simulated use cases. We recorded whether each session was successfully completed and its completion time. Each nurse was asked to use the National Aeronautics and Space Administration (NASA) Task Load Index to self-evaluate the amount of cognitive and physical burden associated with using the device. Results: A total of 83 heuristic violations were identified in the studies. The distribution of the heuristic violations and their average severity are reported. The nurse evaluators successfully completed all 30 sessions of the performance evaluations. All nurses were able to use the device after a single training session. On average, the nurses took 111 seconds (SD 30 seconds) to complete the simulated task. The NASA Task Load Index results indicated that the work overhead on the nurses was low. In fact, most of the burden measures were consistent with zero. The only potentially significant burden was temporal demand, which was consistent with the primary use case of the tool. Conclusions: The evaluation has shown that our design was functional and met the requirements demanded by the nurses’ tight schedules and heavy workloads. The user interface embedded in the tool provided compelling utility to the nurse with minimal distraction. PMID:23612350
Comparison of tongue interface with keyboard for control of an assistive robotic arm.
Struijk, Lotte N S Andreasen; Lontis, Romulus
2017-07-01
This paper demonstrates how an assistive 6-DoF robotic arm with a gripper can be controlled manually using a tongue interface. The proposed method suggests that it is possible for a user to manipulate the surroundings with his or her tongue using the inductive tongue control system as deployed in this study. The sensors of an inductive tongue-computer interface were mapped to the Cartesian control of an assistive robotic arm. The resulting control system was tested manually in order to compare control of the robot using a standard keyboard with control using the tongue interface. Two healthy subjects controlled the robotic arm to precisely move a bottle of water from one location to another. The results show that the tongue interface was able to fully control the robotic arm in a manner similar to the standard keyboard, achieving the same number of successful manipulations with an average increase in task duration of up to 30% compared with the standard keyboard.
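As a rough illustration of the kind of mapping described above, the sketch below translates discrete activations of an inductive tongue-computer interface into Cartesian increments for a robotic arm. It is not the authors' implementation; the sensor names, step size, and gripper encoding are invented for the example.

```python
# Illustrative sketch (not the authors' implementation): mapping discrete
# sensor activations from a tongue interface to Cartesian velocity commands.
from dataclasses import dataclass

@dataclass
class CartesianCommand:
    dx: float = 0.0
    dy: float = 0.0
    dz: float = 0.0
    gripper: int = 0  # -1 open, 0 hold, +1 close

STEP = 0.01  # metres per activation (assumed)

# Hypothetical mapping of tongue-interface sensor IDs to motion increments.
SENSOR_MAP = {
    "front_left":  CartesianCommand(dx=-STEP),
    "front_right": CartesianCommand(dx=+STEP),
    "mid_up":      CartesianCommand(dz=+STEP),
    "mid_down":    CartesianCommand(dz=-STEP),
    "rear_left":   CartesianCommand(dy=-STEP),
    "rear_right":  CartesianCommand(dy=+STEP),
    "tip_close":   CartesianCommand(gripper=+1),
    "tip_open":    CartesianCommand(gripper=-1),
}

def command_from_activation(sensor_id: str) -> CartesianCommand:
    """Return the Cartesian increment for one sensor activation."""
    return SENSOR_MAP.get(sensor_id, CartesianCommand())

if __name__ == "__main__":
    print(command_from_activation("mid_up"))
```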
Use of Design Patterns According to Hand Dominance in a Mobile User Interface
ERIC Educational Resources Information Center
Al-Samarraie, Hosam; Ahmad, Yusof
2016-01-01
User interface (UI) design patterns for mobile applications provide a solution to design problems and can improve the usage experience for users. However, there is a lack of research categorizing the uses of design patterns according to users' hand dominance in a learning-based mobile UI. We classified the main design patterns for mobile…
Hypertext-based design of a user interface for scheduling
NASA Technical Reports Server (NTRS)
Woerner, Irene W.; Biefeld, Eric
1993-01-01
Operations Mission Planner (OMP) is an ongoing research project at JPL that utilizes AI techniques to create an intelligent, automated planning and scheduling system. The information space reflects the complexity and diversity of tasks necessary in most real-world scheduling problems. Thus the problem of the user interface is to present as much information as possible at a given moment and allow the user to quickly navigate through the various types of displays. This paper describes a design which applies the hypertext model to solve these user interface problems. The general paradigm is to provide maps and search queries to allow the user to quickly find an interesting conflict or problem, and then allow the user to navigate through the displays in a hypertext fashion.
CERESVis: A QC Tool for CERES that Leverages Browser Technology for Data Validation
NASA Astrophysics Data System (ADS)
Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Doelling, D.
2015-12-01
In this poster, we present three user interfaces that the CERES team uses to validate pixel-level data. Besides our home-grown tools, we also present the browser technology that we use to provide interactive interfaces, such as jQuery, HighCharts, and Google Earth. We pass data to the users' browsers and use the browsers to do some simple computations. The three user interfaces are: Thumbnails -- displays hundreds of images to allow users to browse 24-hour data files in a few seconds. Multiple synchronized cursors -- allows users to compare multiple images side by side. Bounding boxes and histograms -- allows users to draw multiple bounding boxes on an image while the browser computes and displays the histograms.
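The bounding-box-and-histogram view can be illustrated with a short sketch. The Python fragment below (array shape, bin count, and NaN handling are assumptions, and the real tool does this client-side in the browser) computes the histogram of pixel-level values inside one user-drawn box.

```python
# Illustrative sketch: histogram of the pixel-level values inside a
# user-drawn bounding box, as a QC tool might compute before plotting.
import numpy as np

def bbox_histogram(data, row_min, row_max, col_min, col_max, bins=50):
    """Histogram of the values inside one bounding box (NaNs ignored)."""
    box = data[row_min:row_max, col_min:col_max]
    values = box[np.isfinite(box)]
    counts, edges = np.histogram(values, bins=bins)
    return counts, edges

if __name__ == "__main__":
    granule = np.random.rand(180, 360)          # stand-in for one data file
    counts, edges = bbox_histogram(granule, 40, 80, 100, 200)
    print(counts.sum(), "pixels in the box")
```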
Innovation & evaluation of tangible direct manipulation digital drawing pens for children.
Lee, Tai-Hua; Wu, Fong-Gong; Chen, Huei-Tsz
2017-04-01
Focusing on the theme of direct manipulation, in this study we proposed a new and innovative tangible user interface (TUI) design concept for a manipulative digital drawing pen. Based on focus-group interviews, brainstorming with experts, and the results of a field survey, we selected the most suitable tangible user interface for children between 4 and 7 years of age. Using the new tangible user interface, children could choose between the brush tools after touching and feeling the various patterns. The thickness of the brush could be adjusted by changing the tilt angle. In a subsequent experiment we compared the differences in performance and subjective user satisfaction. A total of sixteen children aged 4-7 years participated in the experiment. Two operating conditions (the newly designed tangible digital drawing pen and traditional icon-clicking digital drawing pens with a visual interface) were tested in random order and in turns. We assessed the children's manipulation performance, accuracy, brush stroke richness, and subjective evaluations. During the experimental process we found that operating functions using the direct manipulation method, and adding shapes and semantic models to explain the purpose of each function, enabled the children to perform stroke switches relatively smoothly. By using direct manipulation digital pens, the children could improve their stroke-switching performance for digital drawing. Additionally, by using various patterns to represent different brushes or tools, the children were able to make selections using their sense of touch, thereby reducing the time required to move along the drawing pens and select icons (a significant difference, p = 0.000, p < 0.01, was found in the manipulation times for drawing thick lines using the crayon function of the two (new and old) drawing pens: new 5.8750 < old 10.7500). The addition of direct manipulation movements to drawing operations enhanced the drawing results, thereby increasing the children's enjoyment of drawing with tangible digital drawing pens. Copyright © 2016 Elsevier Ltd. All rights reserved.
Student Preferences toward Microcomputer User Interfaces.
ERIC Educational Resources Information Center
Hazari, Sunil I.; Reaves, Rita R.
1994-01-01
Describes a study of undergraduates that was conducted to determine students' preferences toward Graphical User Interface versus Command Line Interface during computer-assisted instruction. Previous experience, comfort level, performance scores, and student attitudes are examined and compared, and the computer use survey is appended. (Contains 13…
Querying Event Sequences by Exact Match or Similarity Search: Design and Empirical Evaluation
Wongsuphasawat, Krist; Plaisant, Catherine; Taieb-Maimon, Meirav; Shneiderman, Ben
2012-01-01
Specifying event sequence queries is challenging even for skilled computer professionals familiar with SQL. Most graphical user interfaces for database search use an exact match approach, which is often effective, but near misses may also be of interest. We describe a new similarity search interface, in which users specify a query by simply placing events on a blank timeline and retrieve a similarity-ranked list of results. Behind this user interface is a new similarity measure for event sequences which the users can customize by four decision criteria, enabling them to adjust the impact of missing, extra, or swapped events or the impact of time shifts. We describe a use case with Electronic Health Records based on our ongoing collaboration with hospital physicians. A controlled experiment with 18 participants compared exact match and similarity search interfaces. We report on the advantages and disadvantages of each interface and suggest a hybrid interface combining the best of both. PMID:22379286
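To make the four decision criteria concrete, the sketch below scores a record against a query with separate, tunable penalties for missing events, extra events, swapped events, and time shifts. It is only an illustration of the idea; the published similarity measure and its weighting are more sophisticated, and the weights here are assumptions.

```python
# Illustrative sketch only: a simple similarity score between a query event
# sequence and a record, with user-tunable penalties for missing, extra,
# and swapped events and for time shifts.
def similarity(query, record, w_missing=1.0, w_extra=0.5,
               w_swap=0.5, w_shift=0.1):
    """query/record: lists of (event_type, time) tuples, time in days."""
    penalty = 0.0
    unmatched = list(record)
    matches = []
    for q_type, q_time in query:
        best = min((r for r in unmatched if r[0] == q_type),
                   key=lambda r: abs(r[1] - q_time), default=None)
        if best is None:
            penalty += w_missing                      # missing event
        else:
            unmatched.remove(best)
            penalty += w_shift * abs(best[1] - q_time)  # time shift
            matches.append(best)
    penalty += w_extra * len(unmatched)               # extra events
    # Swapped events: matched record events occurring out of query order.
    times = [t for _, t in matches]
    penalty += w_swap * sum(1 for a, b in zip(times, times[1:]) if a > b)
    return 1.0 / (1.0 + penalty)

if __name__ == "__main__":
    q = [("Admit", 0), ("Drug A", 1), ("Discharge", 5)]
    r = [("Admit", 0), ("Drug A", 3), ("X-ray", 4), ("Discharge", 6)]
    print(round(similarity(q, r), 3))
```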
Herasevich, Vitaly
2017-01-01
Background The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation and the clerical burden of information retrieval makes this score ideal for automated calculation. Objective The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR) integrated, automated SOFA score calculator with subsequent usability evaluation study. Methods First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. Results The overall mean (standard deviation, SD) time-to-complete manual SOFA score calculation time was 61.6 s (33). Among the 24% (12/50) usability survey respondents, our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Conclusions Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians’ needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as “apps.” A user-centered design process and usability evaluation should be considered during creation of these tools. PMID:28526675
Aakre, Christopher Ansel; Kitson, Jaben E; Li, Man; Herasevich, Vitaly
2017-05-18
The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation and the clerical burden of information retrieval makes this score ideal for automated calculation. The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR) integrated, automated SOFA score calculator with subsequent usability evaluation study. First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. The overall mean (standard deviation, SD) time-to-complete manual SOFA score calculation time was 61.6 s (33). Among the 24% (12/50) usability survey respondents, our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians' needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as "apps." A user-centered design process and usability evaluation should be considered during creation of these tools. ©Christopher Ansel Aakre, Jaben E Kitson, Man Li, Vitaly Herasevich. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 18.05.2017.
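For readers unfamiliar with the clerical burden involved, the sketch below shows what automated scoring of two SOFA components (coagulation and renal) might look like when driven by EMR values, using the widely published thresholds. It is a simplified illustration, not the authors' calculator; a complete implementation must also cover the respiration, cardiovascular, liver, and CNS components and handle missing data.

```python
# Minimal sketch, not the authors' calculator: automated scoring of two SOFA
# components from EMR values, using the widely published thresholds.
def sofa_coagulation(platelets_k_per_uL: float) -> int:
    thresholds = [(20, 4), (50, 3), (100, 2), (150, 1)]
    for limit, score in thresholds:
        if platelets_k_per_uL < limit:
            return score
    return 0

def sofa_renal(creatinine_mg_dl: float) -> int:
    thresholds = [(5.0, 4), (3.5, 3), (2.0, 2), (1.2, 1)]
    for limit, score in thresholds:
        if creatinine_mg_dl >= limit:
            return score
    return 0

if __name__ == "__main__":
    print(sofa_coagulation(85), sofa_renal(2.3))   # -> 2 2
```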
A flexible telerobotic system for space operations
NASA Technical Reports Server (NTRS)
Sliwa, N. O.; Will, R. W.
1987-01-01
The objective and design of a proposed goal-oriented, knowledge-based telerobotic system for space operations are described. This design effort encompasses the elements of the system executive and user interface and the distribution and general structure of the knowledge base, the displays, and the task sequencing. The objective of the design effort is to provide an expandable structure for a telerobotic system that provides cooperative interaction between the human operator and computer control. The initial phase of the implementation provides a rule-based, goal-oriented script generator that interfaces to the existing control modes of a telerobotic research system in the Intelligent Systems Research Lab at NASA Research Center.
Audio-Visual Situational Awareness for General Aviation Pilots
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Lodha, Suresh K.; Clancy, Daniel (Technical Monitor)
2001-01-01
Weather is one of the major causes of general aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and "flight" modems, and improving weather data dissemination and presentation. We approach the problem from the improved presentation perspective and propose weather visualization and interaction methods tailored for general aviation pilots. Our system, Aviation Weather Data Visualization Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.
Halder, S; Käthner, I; Kübler, A
2016-02-01
Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and that, specifically, end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
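The bits-per-minute figures quoted above are conventionally computed with the Wolpaw information transfer rate formula; whether the authors used exactly this formula is an assumption, but the sketch below shows how such a rate is typically derived from the number of classes, the accuracy, and the selection speed.

```python
# Sketch of the standard Wolpaw information transfer rate (ITR) formula that
# is commonly used to report bits/min for BCI spellers. The example numbers
# are hypothetical, not taken from the study above.
from math import log2

def wolpaw_itr_bits_per_min(n_classes, accuracy, selections_per_min):
    """Wolpaw ITR in bits/min for an n-class selection task."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = log2(n)
    elif p <= 1.0 / n:
        bits = 0.0          # at or below chance, report zero information
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * selections_per_min

if __name__ == "__main__":
    # e.g. a 5-class speller at 92% accuracy and 2 selections per minute
    print(round(wolpaw_itr_bits_per_min(5, 0.92, 2.0), 2))   # about 3.5
```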
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. © 2012 Diabetes Technology Society.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Introduction Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. Method The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. Results The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. PMID:22401330
C-mii: a tool for plant miRNA and target identification.
Numnark, Somrak; Mhuantong, Wuttichai; Ingsriswang, Supawadee; Wichadakul, Duangdao
2012-01-01
MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. To overcome these limitations and programming complexities, we proposed C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets. With the provided functionalities, it can help accelerate the study of plant miRNAs and targets, especially for small and medium plant molecular labs without bioinformaticians. C-mii is freely available at http://www.biotec.or.th/isl/c-mii for both Windows and Ubuntu Linux platforms.
C-mii: a tool for plant miRNA and target identification
2012-01-01
Background MicroRNAs (miRNAs) have been known to play an important role in several biological processes in both animals and plants. Although several tools for miRNA and target identification are available, the number of tools tailored towards plants is limited, and those that are available have specific functionality, lack graphical user interfaces, and restrict the number of input sequences. Large-scale computational identifications of miRNAs and/or targets of several plants have also been reported. Their methods, however, are only described as flow diagrams, which require programming skills and the understanding of input and output of the connected programs to reproduce. Results To overcome these limitations and programming complexities, we proposed C-mii as a ready-made software package for both plant miRNA and target identification. C-mii was designed and implemented based on established computational steps and criteria derived from previous literature with the following distinguishing features. First, the software is easy to install with all-in-one programs and packaged databases. Second, it comes with graphical user interfaces (GUIs) for ease of use. Users can identify plant miRNAs and targets via step-by-step execution, explore the detailed results from each step, filter the results according to proposed constraints in plant miRNA and target biogenesis, and export sequences and structures of interest. Third, it supplies bird's eye views of the identification results with infographics and grouping information. Fourth, in terms of functionality, it extends the standard computational steps of miRNA target identification with miRNA-target folding and GO annotation. Fifth, it provides helper functions for the update of pre-installed databases and automatic recovery. Finally, it supports multi-project and multi-thread management. Conclusions C-mii constitutes the first complete software package with graphical user interfaces enabling computational identification of both plant miRNA genes and miRNA targets. With the provided functionalities, it can help accelerate the study of plant miRNAs and targets, especially for small and medium plant molecular labs without bioinformaticians. C-mii is freely available at http://www.biotec.or.th/isl/c-mii for both Windows and Ubuntu Linux platforms. PMID:23281648
User acquaintance with mobile interfaces.
Ehrler, Frederic; Walesa, Magali; Sarrey, Evelyne; Wipfli, Rolf; Lovis, Christian
2013-01-01
Handheld technology is slowly finding its place in the healthcare world. Some clinicians already make intensive use of dedicated mobile applications to consult clinical references. However, handheld technology has still not been broadly embraced at the core of the healthcare business, the hospitals. The weak penetration of handheld technology in hospitals can be partly explained by the caution of stakeholders, who must be convinced of the efficiency of these tools before going forward. In a domain where temporal constraints are increasingly strong, caregivers cannot lose time playing with gadgets. Not all users are comfortable with tactile manipulation, and the lack of dedicated peripherals complicates data entry for novices. Stakeholders must be convinced that caregivers will be able to master handheld devices. In this paper, we make the assumption that the proper design of an interface may influence users' performance in recording information. We are also interested in finding out whether users increase their efficiency when using handheld tools repeatedly. To answer these questions, we set up a field study to compare users' performance on three different user interfaces while recording vital signs. Some user interfaces were familiar to users, and others were totally innovative. Results showed that users' familiarity with smartphones influences their performance and that users improve their performance by repeating a task.
TreePlus: interactive exploration of networks with enhanced tree layouts.
Lee, Bongshin; Parr, Cynthia S; Plaisant, Catherine; Bederson, Benjamin B; Veksler, Vladislav D; Gray, Wayne D; Kotfila, Christopher
2006-01-01
Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food webs, ontologies, and social networks difficult to understand and interact with. We propose a new interactive Visual Analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of "Plant a seed and watch it grow." It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but often limited to) revealing clusters. We describe our design goals, describe the interface, and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus and most of them preferred TreePlus.
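The "plant a seed and watch it grow" metaphor can be sketched in a few lines: starting from a user-selected node, the graph is revealed one expansion at a time, and already-visible nodes are skipped so the layout remains a tree. The Python below is only an illustration of that idea (the data structures are assumptions); TreePlus itself adds layout, readability, and interaction techniques on top.

```python
# Illustrative sketch of expanding a graph into a tree from a seed node,
# skipping nodes already placed so the view stays readable.
from collections import defaultdict

class TreeView:
    def __init__(self, graph, seed):
        self.graph = graph              # dict: node -> set of neighbours
        self.parent = {seed: None}      # nodes currently shown, as a tree
        self.children = defaultdict(list)

    def expand(self, node):
        """Reveal the neighbours of an already-visible node."""
        if node not in self.parent:
            raise ValueError("expand a visible node first")
        for nb in sorted(self.graph.get(node, ())):
            if nb not in self.parent:       # avoid cycles/duplicates
                self.parent[nb] = node
                self.children[node].append(nb)
        return self.children[node]

if __name__ == "__main__":
    g = {"A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B"}, "D": {"B"}}
    view = TreeView(g, "A")
    print(view.expand("A"))   # ['B', 'C']
    print(view.expand("B"))   # ['D']  (A and C are already shown)
```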
An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.
Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei
2017-12-01
Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
INL Multi-Robot Control Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot's condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.
Putting Home Data Management into Perspective
2009-12-01
...approaches. However, users of home and personal storage live it. Popular interfaces (e.g., iTunes, iPhoto, and even drop-down lists of recently-opened Word documents) allow users to navigate file...
ERIC Educational Resources Information Center
Kerawalla, Lucinda; Pearce, Darren; Yuill, Nicola; Luckin, Rosemary; Harris, Amanda
2008-01-01
We take a socio-cultural approach to comparing how dual control of a new user interface paradigm--Separate Control of Shared Space (SCOSS)--and dual control of a single user interface can work to mediate the collaborative decision-making process between pairs of children carrying out a multiple categorisation word task on a shared computer.…
NASA Technical Reports Server (NTRS)
Szczur, Martha R.
1991-01-01
The Transportable Applications Environment (TAE) Plus, developed at NASA's Goddard Space Flight Center, is an advanced portable user interface development environment which simplifies the process of creating and managing complex application graphical user interfaces (GUIs), supports prototyping, allows applications to be ported easily between different platforms, and encourages appropriate levels of user interface consistency between applications. This paper discusses the capabilities of the TAE Plus tool and how it makes the job of designing and developing GUIs easier for the application developers. The paper also explains how tools like TAE Plus provide for reusability and ensure reliability of UI software components, as well as how they aid in the reduction of development and maintenance costs.
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and the video are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-03-25
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and the video are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
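The analytic geometry mentioned above amounts to back-projecting the clicked pixel through a calibrated pinhole camera and intersecting the resulting ray with a known plane. The sketch below illustrates this under assumed conditions (a target lying on the ground plane z = 0 and an invented camera pose); it is not the paper's exact procedure.

```python
# Illustrative sketch: estimate the 3D point of a clicked pixel by
# intersecting its back-projected ray with the ground plane z = 0.
import numpy as np

def pixel_to_ground_point(u, v, K, R, t):
    """K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R.T @ ray_cam                            # rotate into world
    cam_center = -R.T @ t                                # camera position
    s = -cam_center[2] / ray_world[2]                    # hit the z = 0 plane
    return cam_center + s * ray_world

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R = np.diag([1.0, -1.0, -1.0])   # camera looking straight down (assumed)
    t = np.array([0.0, 0.0, 1.0])    # so the camera sits 1 m above the plane
    print(pixel_to_ground_point(400, 300, K, R, t))
```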
Interfaces for End-User Information Seeking.
ERIC Educational Resources Information Center
Marchionini, Gary
1992-01-01
Discusses essential features of interfaces to support end-user information seeking. Highlights include cognitive engineering; task models and task analysis; the problem-solving nature of information seeking; examples of systems for end-users, including online public access catalogs (OPACs), hypertext, and help systems; and suggested research…
Ye, Nong; Li, Xiangyang; Farley, Toni
2003-01-15
Hand signs are considered one of the important ways to enter information into computers for certain tasks. Computers receive sensor data of hand signs for recognition. When using hand signs as computer inputs, we need to (1) train computer users in the sign language so that their hand signs can be easily recognized by computers, and (2) design the computer interface to avoid the use of confusing signs for improving user input performance and user satisfaction. For user training and computer interface design, it is important to know which signs can be easily recognized by computers and which signs are not distinguishable by computers. This paper presents a data mining technique to discover distinct patterns of hand signs from sensor data. Based on these patterns, we derive a group of signs that are indistinguishable by computers. Such information can in turn assist in user training and computer interface design.
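One simple way to derive a group of indistinguishable signs, sketched below, is to link any two signs whose mutual confusion rate exceeds a threshold and report the connected groups. This is only an illustration; the paper's data mining technique works on the sensor patterns themselves, and the confusion values here are invented.

```python
# Illustrative sketch (not the paper's mining technique): group hand signs
# that a recognizer frequently confuses, based on a confusion matrix.
from collections import defaultdict

def confusable_groups(confusion, threshold=0.2):
    """confusion[a][b]: fraction of sign a recognized as sign b."""
    # Union-find over signs linked by high confusion.
    parent = {s: s for s in confusion}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for a in confusion:
        for b, rate in confusion[a].items():
            if a != b and rate >= threshold:
                parent[find(a)] = find(b)
    groups = defaultdict(set)
    for s in confusion:
        groups[find(s)].add(s)
    return [g for g in groups.values() if len(g) > 1]

if __name__ == "__main__":
    conf = {
        "J": {"J": 0.7, "I": 0.25, "L": 0.05},
        "I": {"I": 0.65, "J": 0.3, "L": 0.05},
        "L": {"L": 0.95, "J": 0.03, "I": 0.02},
    }
    print(confusable_groups(conf))   # e.g. [{'I', 'J'}]
```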
A novel graphical user interface for ultrasound-guided shoulder arthroscopic surgery
NASA Astrophysics Data System (ADS)
Tyryshkin, K.; Mousavi, P.; Beek, M.; Pichora, D.; Abolmaesumi, P.
2007-03-01
This paper presents a novel graphical user interface developed for a navigation system for ultrasound-guided computer-assisted shoulder arthroscopic surgery. The envisioned purpose of the interface is to assist the surgeon in determining the position and orientation of the arthroscopic camera and other surgical tools within the anatomy of the patient. The user interface features real time position tracking of the arthroscopic instruments with an optical tracking system, and visualization of their graphical representations relative to a three-dimensional shoulder surface model of the patient, created from computed tomography images. In addition, the developed graphical interface facilitates fast and user-friendly intra-operative calibration of the arthroscope and the arthroscopic burr, capture and segmentation of ultrasound images, and intra-operative registration. A pilot study simulating the computer-aided shoulder arthroscopic procedure on a shoulder phantom demonstrated the speed, efficiency and ease-of-use of the system.
Human-computer interface incorporating personal and application domains
Anderson, Thomas G [Albuquerque, NM
2011-03-29
The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.
Human-computer interface incorporating personal and application domains
Anderson, Thomas G.
2004-04-20
The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.
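The transition rule quoted in these claims (moving the cursor near one extreme of the display to reach the personal domain) is simple enough to sketch directly; the edge chosen and the pixel margin below are assumptions.

```python
# Illustrative sketch of the domain-transition rule: the cursor near one
# extreme of the display (here the right edge) selects the personal domain.
def active_domain(cursor_x, display_width, margin=20):
    """Return which domain should receive input for this cursor position."""
    return "personal" if cursor_x >= display_width - margin else "application"

if __name__ == "__main__":
    print(active_domain(1910, 1920))   # -> personal
    print(active_domain(600, 1920))    # -> application
```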
Joyce, Brendan; Lee, Danny; Rubio, Alex; Ogurtsov, Aleksey; Alves, Gelio; Yu, Yi-Kuo
2018-03-15
RAId is a software package that has been actively developed for the past 10 years for computationally and visually analyzing MS/MS data. Founded on rigorous statistical methods, RAId's core program computes accurate E-values for peptides and proteins identified during database searches. Making this robust tool readily accessible for the proteomics community by developing a graphical user interface (GUI) is our main goal here. We have constructed a graphical user interface to facilitate the use of RAId on users' local machines. Written in Java, RAId_GUI not only makes easy executions of RAId but also provides tools for data/spectra visualization, MS-product analysis, molecular isotopic distribution analysis, and graphing the retrieval versus the proportion of false discoveries. The results viewer displays and allows the users to download the analyses results. Both the knowledge-integrated organismal databases and the code package (containing source code, the graphical user interface, and a user manual) are available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads/raid.html .
NASA Technical Reports Server (NTRS)
Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service that is based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users presently having need of space- and land-related research and technical data but who have little or no experience in query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) using the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed nondatabase users to obtain useful information from the database previously accessible only to an expert database user or the database designer.
Dufendach, Kevin R; Koch, Sabine; Unertl, Kim M; Lehmann, Christoph U
2017-10-26
Early involvement of stakeholders in the design of medical software is particularly important due to the need to incorporate complex knowledge and actions associated with clinical work. Standard user-centered design methods include focus groups and participatory design sessions with individual stakeholders, which generally limit user involvement to a small number of individuals due to the significant time investments from designers and end users. The goal of this project was to reduce the effort for end users to participate in co-design of a software user interface by developing an interactive web-based crowdsourcing platform. In a randomized trial, we compared a new web-based crowdsourcing platform to standard participatory design sessions. We developed an interactive, modular platform that allows responsive remote customization and design feedback on a visual user interface based on user preferences. The responsive canvas is a dynamic HTML template that responds in real time to user preference selections. Upon completion, the design team can view the user's interface creations through an administrator portal and download the structured selections through a REDCap interface. We have created a software platform that allows users to customize a user interface and see the results of that customization in real time, receiving immediate feedback on the impact of their design choices. Neonatal clinicians used the new platform to successfully design and customize a neonatal handoff tool. They received no specific instruction and yet were able to use the software easily and reported high usability. VandAID, a new web-based crowdsourcing platform, can involve multiple users in user-centered design simultaneously and provides means of obtaining design feedback remotely. The software can provide design feedback at any stage in the design process, but it will be of greatest utility for specifying user requirements and evaluating iterative designs with multiple options.
Defining and quantifying users' mental Imagery-based BCI skills: a first step.
Lotte, Fabien; Jeunet, Camille
2018-05-17
While promising for many applications, Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) are still scarcely used outside laboratories, due to a poor reliability. It is thus necessary to study and fix this reliability issue. Doing so requires the use of appropriate reliability metrics to quantify both the classification algorithm and the BCI user's performances. So far, Classification Accuracy (CA) is the typical metric used for both aspects. However, we argue in this paper that CA is a poor metric to study BCI users' skills. Here, we propose a definition and new metrics to quantify such BCI skills for Mental Imagery (MI) BCIs, independently of any classification algorithm. Approach: We first show in this paper that CA is notably unspecific, discrete, training data and classifier dependent, and as such may not always reflect successful self-modulation of EEG patterns by the user. We then propose a definition of MI-BCI skills that reflects how well the user can self-modulate EEG patterns, and thus how well he could control an MI-BCI. Finally, we propose new performance metrics, classDis, restDist and classStab that specifically measure how distinct and stable the EEG patterns produced by the user are, independently of any classifier. Main results: By re-analyzing EEG data sets with such new metrics, we indeed confirmed that CA may hide some increase in MI-BCI skills or hide the user inability to self-modulate a given EEG pattern. On the other hand, our new metrics could reveal such skill improvements as well as identify when a mental task performed by a user was no different than rest EEG. Significance: Our results showed that when studying MI-BCI users' skills, CA should be used with care, and complemented with metrics such as the new ones proposed. Our results also stressed the need to redefine BCI user training by considering the different BCI subskills and their measures. To promote the complementary use of our new metrics, we provide the Matlab code to compute them for free and open-source. © 2018 IOP Publishing Ltd.
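In the same spirit as the proposed classifier-independent metrics, though not with the authors' exact formulas, a distinctiveness score can be computed directly from the EEG feature vectors as between-class scatter over within-class scatter, so that higher values indicate more separable self-modulated patterns. The authors' free and open-source Matlab code remains the reference implementation; the Python below is only a sketch with simulated features.

```python
# Minimal sketch in the spirit of classifier-independent distinctiveness
# metrics (NOT the authors' exact formulas): between-class scatter over
# within-class scatter of EEG feature vectors.
import numpy as np

def class_distinctiveness(features, labels):
    """features: (n_trials, n_features); labels: (n_trials,) class ids."""
    X, y = np.asarray(features, float), np.asarray(labels)
    grand_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * np.sum((Xc.mean(axis=0) - grand_mean) ** 2)
        within += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return between / within

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.normal(0.0, 1.0, (40, 8))    # simulated band-power features
    right = rng.normal(1.5, 1.0, (40, 8))
    X = np.vstack([left, right])
    y = np.array([0] * 40 + [1] * 40)
    print(round(class_distinctiveness(X, y), 3))
```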
NASA Technical Reports Server (NTRS)
Bishop, Peter C.; Erickson, Lloyd
1990-01-01
The Management Information and Decision Support Environment (MIDSE) is a research activity to build and test a prototype of a generic human interface on the Johnson Space Center (JSC) Information Network (CIN). The existing interfaces were developed specifically to support operations rather than the type of data which management could use. The diversity of the many interfaces and their relative difficulty discouraged occasional users from attempting to use them for their purposes. The MIDSE activity approached this problem by designing and building an interface to one JSC data base - the personnel statistics tables of the NASA Personnel and Payroll System (NPPS). The interface was designed against the following requirements: generic (use with any relational NOMAD data base); easy to learn (intuitive operations for new users); easy to use (efficient operations for experienced users); self-documenting (help facility which informs users about the data base structure as well as the operation of the interface); and low maintenance (easy configuration to new applications). A prototype interface entitled the JSC Management Information Systems (JSCMIS) was produced. It resides on CIN/PROFS and is available to JSC management who request it. The interface has passed management review and is ready for early use. Three kinds of data are now available: personnel statistics, personnel register, and plan/actual cost.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L; Hanrahan, Patrick
2014-04-29
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA
2011-02-01
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA
2012-03-20
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
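The shelf-to-pane construction described in these claims can be illustrated with a small sketch: operands placed on a row shelf and a column shelf partition the records into a grid of panes, one per combination of shelf values. The field names and data below are invented for the example.

```python
# Illustrative sketch of building a grid of panes from row/column shelf
# assignments over a list of records.
from itertools import product

def build_panes(rows, row_field, col_field):
    """rows: list of dicts; returns {(row_value, col_value): [records]}."""
    row_vals = sorted({r[row_field] for r in rows})
    col_vals = sorted({r[col_field] for r in rows})
    panes = {key: [] for key in product(row_vals, col_vals)}
    for r in rows:
        panes[(r[row_field], r[col_field])].append(r)
    return panes

if __name__ == "__main__":
    data = [
        {"region": "East", "year": 2010, "sales": 120},
        {"region": "East", "year": 2011, "sales": 140},
        {"region": "West", "year": 2010, "sales": 90},
    ]
    panes = build_panes(data, "region", "year")
    for key, records in panes.items():
        print(key, sum(rec["sales"] for rec in records))
```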
WIFIP: a web-based user interface for automated synchrotron beamlines.
Sallaz-Damaz, Yoann; Ferrer, Jean Luc
2017-09-01
The beamline control software, through the associated graphical user interface (GUI), is the user access point to the experiment, interacting with synchrotron beamline components and providing automated routines. FIP, the French beamline for the Investigation of Proteins, is a highly automated macromolecular crystallography (MX) beamline at the European Synchrotron Radiation Facility. On such a beamline, a significant number of users choose to control their experiment remotely. This is often performed with a limited bandwidth and from a large choice of computers and operating systems. Furthermore, this has to be possible in a rapidly evolving experimental environment, where new developments have to be easily integrated. To face these challenges, lightweight, platform-independent control software and an associated GUI are required. Here, WIFIP, a web-based user interface developed at FIP, is described. Beyond being the present FIP control interface, WIFIP is also a proof of concept for future MX control software.
Demiris, A M; Meinzer, H P
1997-01-01
Whether or not a computerized system enhances the conditions of work in the application domain depends very much on the user interface. Graphical user interfaces seem to attract the interest of the users but mostly ignore some basic rules of visual information processing, thus leading to systems which are difficult to use, lowering productivity and increasing working stress (cognitive and work load). In this work we present some fundamental ergonomic considerations and their application to the medical image processing and archiving domain. We introduce the extensions to an existing concept needed to control and guide the development of GUIs with respect to domain-specific ergonomics. The suggested concept, called Model-View-Controller Constraints (MVCC), can be used to programmatically implement ergonomic constraints, and thus has some advantages over written style guides. We conclude with the presentation of existing norms and methods to evaluate user interfaces.
Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter
2017-01-01
Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted-for mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that potentially allows the description of different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside of, but affecting, the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.
NASA Technical Reports Server (NTRS)
Reil, Robin
2011-01-01
The success of JPL's Next Generation Imaging Spectrometer (NGIS) in Earth remote sensing has inspired a follow-on instrument project, the MaRSPlus Sensor System (MSS). One of JPL's responsibilities in the MSS project involves updating the documentation from the previous JPL airborne imagers to provide all the information necessary for an outside customer to operate the instrument independently. As part of this documentation update, I created detailed electrical cabling diagrams to provide JPL technicians with clear and concise build instructions and a database to track the status of cables from order to build to delivery. Simultaneously, a distributed motor control system is being developed for potential use on the proposed 2018 Mars rover mission. This system would significantly reduce the mass necessary for rover motor control, making more mass space available to other important spacecraft systems. The current stage of the project consists of a desktop computer talking to a single "cold box" unit containing the electronics to drive a motor. In order to test the electronics, I developed a graphical user interface (GUI) using MATLAB to allow a user to send simple commands to the cold box and display the responses received in a user-friendly format.
Grissmann, Sebastian; Zander, Thorsten O.; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter
2017-01-01
Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted-for mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that potentially allows the description of different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside of, but affecting, the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios. PMID:28769776
User-Adapted Recommendation of Content on Mobile Devices Using Bayesian Networks
NASA Astrophysics Data System (ADS)
Iwasaki, Hirotoshi; Mizuno, Nobuhiro; Hara, Kousuke; Motomura, Yoichi
Mobile devices, such as cellular phones and car navigation systems, are essential to daily life. People acquire necessary information and preferred content over communication networks anywhere, anytime. However, usability issues arise from the simplicity of user interfaces themselves. Thus, a recommendation of content that is adapted to a user's preference and situation will help the user select content. In this paper, we describe a method to realize such a system using Bayesian networks. This user-adapted mobile system is based on a user model that provides recommendation of content (i.e., restaurants, shops, and music that are suitable to the user and situation) and that learns incrementally based on accumulated usage history data. However, sufficient samples are not always guaranteed, since a user model would require combined dependency among users, situations, and contents. Therefore, we propose the LK method for modeling, which complements incomplete and insufficient samples using knowledge data, and CPT incremental learning for adaptation based on a small number of samples. In order to evaluate the methods proposed, we applied them to restaurant recommendations made on car navigation systems. The evaluation results confirmed that our model based on the LK method can be expected to provide better generalization performance than that of the conventional method. Furthermore, our system would require much less operation than current car navigation systems from the beginning of use. Our evaluation results also indicate that learning a user's individual preference through CPT incremental learning would be beneficial to many users, even with only a few samples. As a result, we have developed the technology of a system that becomes more adapted to a user the more it is used.
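Assuming that CPT incremental learning behaves like a count-based (Dirichlet-style) update, which is one common way to adapt conditional probability tables from a few samples, the sketch below shows how a P(content | situation) table could be adapted from a handful of usage-history records. It is not the authors' exact algorithm, and the LK method's knowledge-derived priors are represented here only as pseudo-counts.

```python
# Minimal sketch (assumption: a Dirichlet-style count update) of adapting a
# conditional probability table P(content | situation) from usage history.
from collections import defaultdict

class IncrementalCPT:
    def __init__(self, contents, prior=1.0):
        self.contents = contents
        # prior pseudo-counts stand in for knowledge-based initialization
        self.counts = defaultdict(lambda: {c: prior for c in contents})

    def update(self, situation, chosen_content):
        """Record one observed choice in a given situation."""
        self.counts[situation][chosen_content] += 1.0

    def prob(self, situation, content):
        row = self.counts[situation]
        return row[content] / sum(row.values())

if __name__ == "__main__":
    cpt = IncrementalCPT(["ramen", "sushi", "cafe"])
    for _ in range(3):
        cpt.update(("lunch", "raining"), "ramen")
    print(round(cpt.prob(("lunch", "raining"), "ramen"), 2))   # -> 0.67
```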
NASA Technical Reports Server (NTRS)
Oishi, Meeko; Tomlin, Claire; Degani, Asaf
2003-01-01
Human interaction with complex hybrid systems involves the user, the automation's discrete mode logic, and the underlying continuous dynamics of the physical system. Often the user-interface of such systems displays a reduced set of information about the entire system. In safety-critical systems, how can we identify user-interface designs which do not have adequate information, or which may confuse the user? Here we describe a methodology, based on hybrid system analysis, to verify that a user-interface contains information necessary to safely complete a desired procedure or task. Verification within a hybrid framework allows us to account for the continuous dynamics underlying the simple, discrete representations displayed to the user. We provide two examples: a car traveling through a yellow light at an intersection and an aircraft autopilot in a landing/go-around maneuver. The examples demonstrate the general nature of this methodology, which is applicable to hybrid systems (not fully automated) which have operational constraints we can pose in terms of safety. This methodology differs from existing work in hybrid system verification in that we directly account for the user's interactions with the system.
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online, data-driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users, and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges, and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations with which it interfaces, and the user interface supports diverse procedural views.
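A minimal sketch of the data-driven idea described, under the assumption that each procedure step carries a machine-checkable condition on live plant data; the tag names, limits, and step texts are hypothetical and not taken from the patent.

```python
# Illustrative-only sketch: each procedure step carries a condition on live
# plant data; the system evaluates it and presents the step status.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ProcedureStep:
    text: str
    satisfied: Callable[[Dict[str, float]], bool]   # condition on plant data

steps = [
    ProcedureStep("Verify pump A discharge pressure > 120 psig",
                  lambda d: d["pump_a_pressure_psig"] > 120.0),
    ProcedureStep("Verify tank level between 40% and 60%",
                  lambda d: 40.0 <= d["tank_level_pct"] <= 60.0),
]

plant_data = {"pump_a_pressure_psig": 132.5, "tank_level_pct": 35.0}  # monitored values

for step in steps:
    status = "SATISFIED" if step.satisfied(plant_data) else "NOT SATISFIED"
    print(f"{status:13s}  {step.text}")
```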
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Wong, Edmond; Simon, Donald L.
1994-01-01
A prototype Lisp-based soft real-time object-oriented Graphical User Interface for control system development is presented. The Graphical User Interface executes alongside a test system in laboratory conditions to permit observation of the closed loop operation through animation, graphics, and text. Since it must perform interactive graphics while updating the screen in real time, techniques are discussed which allow quick, efficient data processing and animation. Examples from an implementation are included to demonstrate some typical functionalities which allow the user to follow the control system's operation.
Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles.
Eom, Hwisoo; Lee, Sang Hun
2015-06-12
A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model.
Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles
Eom, Hwisoo; Lee, Sang Hun
2015-01-01
A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model. PMID:26076406
Kim, Eun Yi
2017-01-01
A significant challenge faced by visually impaired people is ‘wayfinding’, which is the ability to find one’s way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. Through analyzing streaming images, the proposed system first classifies the current situation of a user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques. It estimates the motions of the user with two inertial sensors and records the trajectories of the user toward the destination, which are also used as a guide for the return route after reaching the destination. To efficiently convey the recognized results using an auditory interface, activity-based instructions are generated that guide the user in a series of movements along a route. To assess the effectiveness of the proposed system, experiments were conducted in several indoor environments: the situation awareness accuracy was 90% and the object detection false alarm rate was 0.016. In addition, our field test results demonstrate that users can locate their paths with an accuracy of 97%. PMID:28813033
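The trajectory-recording idea can be illustrated with a simple step-and-heading dead-reckoning update; this is a hedged sketch under assumed step lengths and headings, not the paper's actual motion-estimation algorithm.

```python
# Hedged sketch of trajectory recording: a step-and-heading dead-reckoning
# update from inertial measurements. Step length, headings, and the sensor
# interface are assumptions for illustration only.
import math

def update_position(x, y, heading_rad, step_length_m=0.7):
    """Advance the estimated position by one detected step."""
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

# Record the outbound trajectory; reversing it guides the return route.
trajectory = [(0.0, 0.0)]
for heading in [0.0, 0.0, math.pi / 2, math.pi / 2]:   # hypothetical headings
    trajectory.append(update_position(*trajectory[-1], heading))
return_route = list(reversed(trajectory))
print(return_route)
```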
Traffic Generator (TrafficGen) Version 1.4.2: Users Guide
2016-06-01
events, the user has to enter them manually. We will research and implement a way to better define and organize the multicast addresses so they can be... the network with Transmission Control Protocol and User Datagram Protocol Internet Protocol traffic. Each node generating network traffic in an... TrafficGen Graphical User Interface (GUI); Anatomy of the User Interface; Scenario Configuration and MGEN Files; Working with
Modeling Goal-Directed User Exploration in Human-Computer Interaction
2011-02-01
In addition to information scent, other factors, including the layout position and grouping of options in the user interface, also affect user exploration and the likelihood of success. This dissertation contributes a new model of goal-directed exploration to better inform UI design. In addition to infoscent, the layout of the UI also affects the choices made during...
WASP: a Web-based Allele-Specific PCR assay designing tool for detecting SNPs and mutations
Wangkumhang, Pongsakorn; Chaichoompu, Kridsadakorn; Ngamphiw, Chumpol; Ruangrit, Uttapong; Chanprasert, Juntima; Assawamakin, Anunchai; Tongsima, Sissades
2007-01-01
Background Allele-specific (AS) Polymerase Chain Reaction is a convenient and inexpensive method for genotyping Single Nucleotide Polymorphisms (SNPs) and mutations. It is applied in many recent studies including population genetics, molecular genetics and pharmacogenomics. Using known AS primer design tools to create primers leads to a cumbersome process for inexperienced users, since information about the SNP/mutation must be acquired from public databases prior to the design. Furthermore, most of these tools do not offer mismatch enhancement of the designed primers. The available web applications do not provide a user-friendly graphical input interface or intuitive visualization of their primer results. Results This work presents a web-based AS primer design application called WASP. This tool can efficiently design AS primers for human SNPs as well as mutations. To assist scientists with collecting necessary information about target polymorphisms, this tool provides a local SNP database containing over 10 million SNPs of various populations from public domain databases, namely NCBI dbSNP, HapMap and JSNP. This database is tightly integrated with the tool so that users can perform the design for existing SNPs without leaving the site. To guarantee specificity of AS primers, the proposed system incorporates a primer specificity enhancement technique widely used in experimental protocols. In particular, WASP makes use of different destabilizing effects by introducing one deliberate 'mismatch' at the penultimate (second to last of the 3'-end) base of AS primers to improve the resulting AS primers. Furthermore, WASP offers a graphical user interface through scalable vector graphics (SVG) drawing that allows users to select SNPs and graphically visualize the designed primers and their conditions. Conclusion WASP offers a tool for designing AS primers for both SNPs and mutations. By integrating the database of known SNPs (using gene ID or rs number), this tool facilitates the awkward process of getting flanking sequences and other related information from public SNP databases. It takes into account the underlying destabilizing effect to ensure the effectiveness of designed primers. With its user-friendly SVG interface, WASP intuitively presents the resulting designed primers, which assists users in exporting or making further adjustments to the design. This software can be freely accessed at . PMID:17697334
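A minimal sketch of the specificity-enhancement rule the abstract describes, introducing one deliberate mismatch at the penultimate base of the 3' end of an allele-specific primer; the particular base-substitution rule used here is an assumption for illustration, since WASP selects mismatches according to their destabilizing effect.

```python
# Minimal sketch of the penultimate-mismatch idea: substitute the
# second-to-last base at the 3' end of an allele-specific primer.
# The substitution table below is a simple illustrative choice, not
# WASP's actual scoring of destabilizing effects.
SUBSTITUTE = {"A": "C", "C": "A", "G": "T", "T": "G"}   # hypothetical rule

def add_penultimate_mismatch(primer_5to3):
    """Return the primer with a mismatch at the second-to-last 3' base."""
    bases = list(primer_5to3.upper())
    bases[-2] = SUBSTITUTE[bases[-2]]
    return "".join(bases)

print(add_penultimate_mismatch("ACGTACGTAC"))  # -> ACGTACGTCC
```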
An intelligent subtitle detection model for locating television commercials.
Huang, Yo-Ping; Hsu, Liang-Wei; Sandnes, Frode-Eika
2007-04-01
A strategy for locating television (TV) commercials in TV programs is proposed. Based on the observation that most TV commercials do not have subtitles, the first stage exploits six subtitle constraints and an adaptive neurofuzzy inference system model to determine whether a frame contains a subtitle or not. The second stage involves locating the mark-in/mark-out points using a genetic algorithm. An interactive user interface allows users to efficiently identify and fine-tune the exact boundaries separating the commercials from the program content. Furthermore, erroneous boundaries can be manually corrected. Experimental results show that the precision and recall rates exceed 90%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Madhu Sudhan; Onar, Omer C; Campbell, Steven L
Integrated charger topologies that have been researched so far combine the dc-dc converter and the charging functionality and usually have no isolation in the system. Isolation is an important feature required for systems that interface with the grid, and its absence is therefore a major limitation that needs to be addressed along with the integrated functionality. This study features a unique way of combining the wired and wireless charging functionalities with vehicle-side boost converter integration while maintaining isolation, to provide the best solution for plug-in electric vehicle (PEV) users. The performance of the proposed architecture is presented for wired and wireless charging options at different power levels.
A Graphical Database Interface for Casual, Naive Users.
ERIC Educational Resources Information Center
Burgess, Clifford; Swigger, Kathleen
1986-01-01
Describes the design of a database interface for infrequent users of computers which consists of a graphical display of a model of a database and a natural language query language. This interface was designed for and tested with physicians at the University of Texas Health Science Center in Dallas. (LRW)
Physician acceptance of the IRIS user interface during a clinical trial at the Ottawa Civic Hospital
NASA Astrophysics Data System (ADS)
Coristine, Marjorie; Beeton, Carolyn; Tombaugh, Jo W.; Ahuja, J.; Belanger, Garry; Dillon, Richard F.; Currie, Shawn; Hind, E.
1990-07-01
During a clinical trial, emergency physicians and radiologists at the Ottawa Civic Hospital used IRIS (Integrated Radiological Information System) to process patients' x-rays, requisitions, and reports, and to have consultations, for 319 active cases. This paper discusses IRIS user interface issues raised during the clinical trial. The IRIS workstation consists of three major system components: 1) an image screen for viewing and enhancing images; 2) a control screen for presenting patient information, selecting images, and executing commands; and 3) a hands-free telephone for reporting activities and consultations. The control screen and hands-free telephone user interface allow physicians to navigate through patient files, select images and access reports, enter new reports, and perform remote consultations. Physicians were observed using the system during the trial and responded to questions about the user interface on an extensive debriefing interview after the trial. Overall, radiologists and emergency physicians were satisfied with IRIS control screen functionality and user interface. In a number of areas radiologists and emergency physicians differed in their user interface needs. Some features were found to be acceptable to one group of physicians but required modification to meet the needs of the other physician group. The data from the interviews, along with the comments from radiologists and emergency physicians provided important information for the revision of some features, and for the evolution of new features.
Allowing the Advantaged User in a Network Centric System to Get Through the Disadvantaged Interface
2009-09-01
Thesis by Lawrence Brandon, September 2009 (Naval ...). The work seeks to identify those factors that cause disadvantaged interfaces within network centric systems and provides recommendations for addressing these challenges so that...
HST archive primer, version 4.1
NASA Technical Reports Server (NTRS)
Fruchter, A. (Editor); Baum, S. (Editor)
1994-01-01
This version of the HST Archive Primer provides the basic information a user needs to know to access the HST archive via StarView, the new user interface to the archive. Using StarView, users can search for observations of interest, find calibration reference files, and retrieve data from the archive. Both the terminal version of StarView and the X-windows version feature a name resolver which simplifies searches of the HST archive based on target name. In addition, the X-windows version of StarView allows preview of all public HST data; compressed versions of public images are displayed via SAOIMAGE, while spectra are plotted using the public plotting package, XMGR. Finally, the version of StarView described here features screens designed for observers preparing Cycle 5 HST proposals.
The Device Centric Communication System for 5G Networks
NASA Astrophysics Data System (ADS)
Biswash, S. K.; Jayakody, D. N. K.
2017-01-01
The Fifth Generation (5G) communication networks have several functional features, such as massive Multiple Input and Multiple Output (MIMO), device-centric data and voice support, and smarter-device communications. The objective for 5G networks is to achieve 1000x more throughput, 10x higher spectral efficiency, and 100x more energy efficiency than existing technologies. The 5G system will provide a balance between the Quality of Experience (QoE) and the Quality of Service (QoS) without compromising the user benefit. The data rate has been the key metric for wireless QoS; QoE deals with the delay and throughput. In order to realize a balance between QoS and QoE, we propose a cellular device-centric communication methodology for the overlapping network coverage area in the 5G communication system. Multiple beacon signals from mobile towers indicate an overlapping network area, and a user must be forwarded to the next location area. To resolve this issue, we suggest a user-centric methodology (without a Base Station interface) to hand the device over to the next area until the user finalizes the communication. The proposed method will reduce the signalling cost and overheads of the communication.
Image processing and applications based on visualizing navigation service
NASA Astrophysics Data System (ADS)
Hwang, Chyi-Wen
2015-07-01
When facing the "overabundance" of semantic web information, this paper proposes a hierarchical classification and visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + Knowledge on Demand (KOD) service. The aim of the multimedia processing and empirical application testing was to investigate the utility and usability of this visualizing navigation strategy in web communication design, and whether it enables users to retrieve and construct their personal knowledge. Furthermore, based on market segmentation theory in the marketing model, a User Interface (UI) classification strategy is proposed and a set of hypermedia design principles is formulated for further UI strategy and e-learning resources in semantic web communication. The research findings are: (1) Irrespective of whether the simple declarative knowledge or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognition effect than the "non CM + SS + KOD navigation system". However, for users with no web design experience, the navigation system does not have an obvious cognition effect. (2) The essential role of classification in semantic web communication design: different groups of users have diverse preference needs and different cognitive styles in the CM + SS + KOD navigation system.
NASA Astrophysics Data System (ADS)
Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila
2017-08-01
In previous work on Euler’s spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, a graphical user interface was not developed to capture user input. This weakness may confuse users, since input and output are displayed in the same worksheet. Besides, the existing Euler’s spreadsheet calculator is not interactive, as there is no prompt message if there is a mistake in inputting the parameters. On top of that, there are no user instructions to guide users in inputting the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly and interactive graphical user interface. This improvement aims to capture user input with user instructions and interactive error-prompt messages by using VBA programming. This Euler’s graphical user interface spreadsheet calculator does not act as a black box, as users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and later in any programming language.
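For reference, the numerical scheme the spreadsheet calculator implements is Euler's method, y_{n+1} = y_n + h f(x_n, y_n). The short Python sketch below (the paper itself uses VBA in a spreadsheet) shows the scheme on an illustrative ODE; the example equation and step size are not taken from the paper.

```python
# Sketch of Euler's method y_{n+1} = y_n + h * f(x_n, y_n) for y' = f(x, y).
# The example ODE and step size are illustrative only.
def euler(f, x0, y0, h, n_steps):
    """Return the list of (x, y) points produced by Euler's method."""
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)
        x = x + h
        points.append((x, y))
    return points

# Example: y' = y, y(0) = 1, approximating e^x on [0, 1].
for x, y in euler(lambda x, y: y, 0.0, 1.0, 0.25, 4):
    print(f"x = {x:.2f}, y ~ {y:.4f}")
```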
Truong, Dennis Q; Hüber, Mathias; Xie, Xihe; Datta, Abhishek; Rahman, Asif; Parra, Lucas C; Dmochowski, Jacek P; Bikson, Marom
2014-01-01
Computational models of brain current flow during transcranial electrical stimulation (tES), including transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), are increasingly used to understand and optimize clinical trials. We propose that broad dissemination requires simple graphical user interface (GUI) software that allows users to explore and design montages in real time, based on their own clinical/experimental experience and objectives. We introduce two complementary open-source platforms for this purpose: BONSAI and SPHERES. BONSAI is a web (cloud) based application (available at neuralengr.com/bonsai) that can be accessed through any flash-supported browser interface. SPHERES (available at neuralengr.com/spheres) is a stand-alone GUI application that allows consideration of arbitrary montages on a concentric sphere model by leveraging an analytical solution. These open-source tES modeling platforms are designed to be upgraded and enhanced. Trade-offs between open-access approaches that balance ease of access, speed, and flexibility are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting
NASA Astrophysics Data System (ADS)
Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.
2017-12-01
Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation, and efficiency enhancement in agricultural processes is not yet extensive. This research proposes a low-cost agricultural robot architecture. The robot can help farmers survey their farm area, treat the tomato plants, and harvest the ripe tomatoes. Communication between farmer and robot was provided by a wireless link using radio waves to reach a wide area (120 m radius). The radio link was combined with Bluetooth to simplify the communication between the robot and the farmer's Android smartphone. The robot was equipped with a camera, so the farmers could survey the farm situation in real time on a 7-inch monitor. The farmers controlled the robot and arm movement through a user interface on an Android smartphone. The user interface contains control icons that allow farmers to control the robot movement (forward, reverse, turn right, and turn left) and cut the spotty leaves or harvest the ripe tomatoes.
smRithm: Graphical user interface for heart rate variability analysis.
Nara, Sanjeev; Kaur, Manvinder; Datta, Saurav
2015-01-01
Over the past 25 years, heart rate variability (HRV) has become a non-invasive research and clinical tool for indirectly investigating both cardiac and autonomic system function in healthy and diseased individuals. It provides valuable information about a wide range of cardiovascular disorders, pulmonary diseases, neurological diseases, etc. Its primary purpose is to assess the functioning of the nervous system. The source of information for HRV analysis is the continuous beat-to-beat measurement of inter-beat intervals. Electrocardiography (ECG or EKG) is considered the best way to measure inter-beat intervals. This paper proposes an open-source Graphical User Interface (GUI), smRithm, developed in MATLAB for HRV analysis that applies effective techniques to raw ECG signals to process and decompose them in a simple manner, yielding more useful information from the signals that can be utilized in more powerful and efficient HRV-related applications in the near future.
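As a hedged illustration of the kind of quantities such a GUI derives from beat-to-beat data, the sketch below computes RR intervals from R-peak times and the common time-domain measures SDNN and RMSSD; the peak times are invented and this is not smRithm's actual code.

```python
# Hedged sketch of basic time-domain HRV measures: RR (inter-beat) intervals
# from detected R-peak times, then SDNN and RMSSD. R-peak times are made up.
import math

def hrv_time_domain(r_peak_times_s):
    """Return (mean RR, SDNN, RMSSD) in milliseconds from R-peak times in seconds."""
    rr = [1000.0 * (b - a) for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    mean_rr = sum(rr) / len(rr)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_rr, sdnn, rmssd

print(hrv_time_domain([0.0, 0.82, 1.66, 2.46, 3.30, 4.10]))
```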
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.
Liberati, Giulia; Dalboni da Rocha, Josué Luiz; van der Heiden, Linda; Raffone, Antonino; Birbaumer, Niels; Olivetti Belardinelli, Marta; Sitaram, Ranganatha
2012-01-01
Brain-computer interfaces (BCIs) provide alternative methods for communicating and acting on the world, since messages or commands are conveyed from the brain to an external device without using the normal output pathways of peripheral nerves and muscles. Alzheimer's disease (AD) patients in the most advanced stages, who have lost the ability to communicate verbally, could benefit from a BCI that may allow them to convey basic thoughts (e.g., "yes" and "no") and emotions. There is currently no report of such research, mostly because the cognitive deficits in AD patients pose serious limitations to the use of traditional BCIs, which are normally based on instrumental learning and require users to self-regulate their brain activation. Recent studies suggest that not only self-regulated brain signals, but also involuntary signals, for instance related to emotional states, may provide useful information about the user, opening up the path for so-called "affective BCIs". These interfaces do not necessarily require users to actively perform a cognitive task, and may therefore be used with patients who are cognitively challenged. In the present hypothesis paper, we propose a paradigm shift from instrumental learning to classical conditioning, with the aim of discriminating "yes" and "no" thoughts after associating them to positive and negative emotional stimuli respectively. This would represent a first step in the development of a BCI that could be used by AD patients, lending a new direction not only for communication, but also for rehabilitation and diagnosis.
SPIKY: a graphical user interface for monitoring spike train synchrony
Mulansky, Mario; Bozanic, Nebojsa
2015-01-01
Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance, and the SPIKE-synchronization (an improved and simplified extension of event synchronization) that have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector that makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels. PMID:25744888
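As a rough, simplified illustration of one of the measures SPIKY implements, the following sketch computes a discrete-time approximation of the ISI-distance between two spike trains; SPIKY's optimized Matlab implementation differs in its normalization details, edge handling, and efficiency.

```python
# Simplified discrete-time sketch of the ISI-distance idea: at each sample
# time, compare the instantaneous inter-spike intervals of two trains and
# average the normalized difference. Not the optimized SPIKY implementation.
def current_isi(spikes, t):
    """Inter-spike interval containing time t (edge effects handled crudely)."""
    prev = max((s for s in spikes if s <= t), default=spikes[0])
    nxt = min((s for s in spikes if s > t), default=spikes[-1])
    return max(nxt - prev, 1e-12)

def isi_distance(spikes_x, spikes_y, t_start, t_end, dt=0.001):
    """Time-averaged normalized ISI difference between two spike trains."""
    n, acc, t = 0, 0.0, t_start
    while t < t_end:
        ix, iy = current_isi(spikes_x, t), current_isi(spikes_y, t)
        acc += abs(ix - iy) / max(ix, iy)
        n += 1
        t += dt
    return acc / n

print(isi_distance([0.1, 0.5, 0.9], [0.1, 0.45, 0.95], 0.1, 0.9))
```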
SPIKY: a graphical user interface for monitoring spike train synchrony.
Kreuz, Thomas; Mulansky, Mario; Bozanic, Nebojsa
2015-05-01
Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance, and the SPIKE-synchronization (an improved and simplified extension of event synchronization) that have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector that makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels. Copyright © 2015 the American Physiological Society.
pROC: an open-source package for R and S+ to analyze and compare ROC curves.
Robin, Xavier; Turck, Natacha; Hainard, Alexandre; Tiberti, Natalia; Lisacek, Frédérique; Sanchez, Jean-Charles; Müller, Markus
2011-03-17
Receiver operating characteristic (ROC) curves are useful tools to evaluate classifiers in biomedical and bioinformatics applications. However, conclusions are often reached through inconsistent use or insufficient statistical analysis. To support researchers in their ROC curve analysis we developed pROC, a package for R and S+ that contains a set of tools for displaying, analyzing, smoothing and comparing ROC curves in a user-friendly, object-oriented and flexible interface. With data previously imported into the R or S+ environment, the pROC package builds ROC curves and includes functions for computing confidence intervals, statistical tests for comparing total or partial area under the curve or the operating points of different classifiers, and methods for smoothing ROC curves. Intermediary and final results are visualised in user-friendly interfaces. A case study based on published clinical and biomarker data shows how to perform a typical ROC analysis with pROC. pROC is a package for R and S+ specifically dedicated to ROC analysis. It proposes multiple statistical tests to compare ROC curves, and in particular partial areas under the curve, allowing proper ROC interpretation. pROC is available in two versions: in the R programming language or with a graphical user interface in the S+ statistical software. It is accessible at http://expasy.org/tools/pROC/ under the GNU General Public License. It is also distributed through the CRAN and CSAN public repositories, facilitating its installation.
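pROC itself is an R/S+ package; as a language-neutral illustration of the core computation a ROC analysis performs, the sketch below builds ROC points and the trapezoidal area under the curve from scores and binary labels (ties in scores are ignored for simplicity).

```python
# Illustration of what a ROC analysis computes: ROC points and AUC
# (trapezoidal rule) from classifier scores and binary labels.
def roc_auc(scores, labels):
    """Return ROC points [(FPR, TPR), ...] and AUC for binary labels (1 = positive)."""
    pairs = sorted(zip(scores, labels), reverse=True)      # descending score
    pos = sum(labels) or 1
    neg = len(labels) - sum(labels) or 1
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    auc = sum((x2 - x1) * (y1 + y2) / 2.0
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

_, auc = roc_auc([0.9, 0.8, 0.7, 0.6, 0.55, 0.4], [1, 1, 0, 1, 0, 0])
print(f"AUC = {auc:.2f}")
```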
Strategic Help in User Interfaces for Information Retrieval.
ERIC Educational Resources Information Center
Brajnik, Giorgio; Mizzaro, Stefano; Tasso, Carlo; Venuti, Fabio
2002-01-01
Discussion of search strategy in information retrieval by end users focuses on the role played by strategic reasoning and design principles for user interfaces. Highlights include strategic help based on collaborative coaching; a conceptual model for strategic help; and a prototype knowledge-based system named FIRE. (Author/LRW)
Enabling Accessibility Through Model-Based User Interface Development.
Ziegler, Daniel; Peissner, Matthias
2017-01-01
Adaptive user interfaces (AUIs) can increase the accessibility of interactive systems. They provide personalized display and interaction modes to fit individual user needs. Most AUI approaches rely on model-based development, which is considered relatively demanding. This paper explores strategies to make model-based development more attractive for mainstream developers.
NASA Technical Reports Server (NTRS)
Aggrawal, Bharat
1994-01-01
This viewgraph presentation describes the development of user interfaces for OS/2 versions of computer codes for the analysis of seals. Current status, new features, work in progress, and future plans are discussed.
14 CFR § 1215.102 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., and the necessary TDRSS operational areas, interface devices, and NASA communication circuits that... interface. (c) Bit stream. The electronic signals acquired by TDRSS from the user craft or the user...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browne, S.V.; Green, S.C.; Moore, K.
1994-04-01
The Netlib repository, maintained by the University of Tennessee and Oak Ridge National Laboratory, contains freely available software, documents, and databases of interest to the numerical, scientific computing, and other communities. This report includes both the Netlib User's Guide and the Netlib System Manager's Guide, and contains information about Netlib's databases, interfaces, and system implementation. The Netlib repository's databases include the Performance Database, the Conferences Database, and the NA-NET mail forwarding and Whitepages Databases. A variety of user interfaces enable users to access the Netlib repository in the manner most convenient and compatible with their networking capabilities. These interfaces include the Netlib email interface, the Xnetlib X Windows client, the netlibget command-line TCP/IP client, anonymous FTP, anonymous RCP, and gopher.
Building a Trusted Path for Applications Using COTS Components
2004-11-01
... against attacks by malicious software. Trojan horse programs, i.e., programs with additional hidden, often malicious, functions, are more and more... cannot be imitated by untrusted software. Wiseman et al. (1988) propose a user interface for the SMITE system to prevent Trojan horses from... input, two of which can also be used for the hologram service.
Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia
1996-01-01
The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.
Sutiono, Agung Budi; Suwa, Hirohiko; Ohta, Toshizumi; Arifin, Muh Zafrullah; Kitamura, Yohei; Yoshida, Kazunari; Merdika, Daduk; Qiantori, Andri; Iskandar
2012-12-01
Disasters have negative impacts on the environment and human life. One common cause of critical conditions is traumatic brain injury (TBI), namely epidural (EDH) and subdural hematoma (SDH), caused by falling hard objects during an earthquake. We proposed and analyzed the responses of users, namely neurosurgeons, general doctors/surgeons, and nurses, when they interacted with a TBI computer interface. The communication system was supported by TBI web-based applications using an emergency broadband access network with a tethered balloon and was simulated in a field trial to evaluate the coverage area. The interface consisted of demography data and multiple tabs for anamnesis, treatment, follow-up, and teleconference interfaces. The interface allows neurosurgeons, surgeons/general doctors, and nurses to enter EDH and SDH patients' data while referring them in the emergency simulation, and was evaluated based on the time needed and the users' understanding. The average time needed, obtained with a Lenovo T500 notebook using a mouse, was 8-10 min for neurosurgeons, 12-15 min for surgeons/general doctors, and 15-19 min for nurses. Using a ThinkPad X201 Tablet, the time needed for data entry was 5-7 min for neurosurgeons, 7-10 min for surgeons/general doctors, and 12-16 min for nurses. We observed that the time difference depended on the computer type and the user's literacy qualification as well as their understanding of traumatic brain injury, particularly for the nurses. In conclusion, there are five data classifications for a simple TBI GUI, namely: 1) demography, 2) specific anamnesis for EDH and SDH, 3) treatment actions and medication for TBI, 4) follow-up data display, and 5) teleneurosurgery for streaming video consultation. The tablet PC was more convenient and faster for data entry than the computer mouse/touchpad. An emergency broadband access network using a tethered balloon can be deployed to cover the communication systems in a disaster area.
Usability Evaluation Methods for Gesture-Based Games: A Systematic Review
Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; De Marchi, Ana Carolina Bertoletti
2016-01-01
Background Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. Objective This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. Methods The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Of the 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. Results In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user’s age and limitations. Conclusions Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence. PMID:27702737
Natural user interface as a supplement of the holographic Raman tweezers
NASA Astrophysics Data System (ADS)
Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel
2014-09-01
Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via the mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras or the Kinect game console instead. We proposed a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes signals from the sensors and sends control commands to the HRT, which subsequently controls the positions of the trapping beams, the micropositioning stage and the acquisition system for Raman spectra. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") serving for the manipulation of particles are displayed on a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful if both hands are busy. The proposed methods make manual control of HRT more efficient and also provide a good platform for future semi-automated and fully automated operation.
Design and Implementation of a Set-Top Box–Based Homecare System Using Hybrid Cloud
Lin, Bor-Shing; Hsiao, Pei-Chi; Cheng, Po-Hsun; Jan, Gene Eu
2015-01-01
Introduction: Telemedicine has become a prevalent topic in recent years, and several telemedicine systems have been proposed; however, such systems are an unsuitable fit for the daily requirements of users. Materials and Methods: The system proposed in this study was developed as a set-top box integrated with the Android™ (Google, Mountain View, CA) operating system to provide a convenient and user-friendly interface. The proposed system can assist with family healthcare management, telemedicine service delivery, and information exchange among hospitals. To manage the system, a novel type of hybrid cloud architecture was also developed. Results: Updated information is stored on a public cloud, enabling medical staff members to rapidly access information when diagnosing patients. In the long term, the stored data can be reduced to improve the efficiency of the database. Conclusions: The proposed design offers a robust architecture for storing data in a homecare system and can thus resolve network overload and congestion resulting from accumulating data, which are inherent problems in centralized architectures, thereby improving system efficiency. PMID:26075333
Recommending personally interested contents by text mining, filtering, and interfaces
Xu, Songhua
2015-10-27
A personalized content recommendation system includes a client interface device configured to monitor a user's information data stream. A collaborative filter remote from the client interface device generates automated predictions about the interests of the user. A database server stores personal behavioral profiles and the user's preferences based on a plurality of monitored past behaviors and an output of the collaborative user personal interest inference engine. A programmed personal content recommendation server filters items in an incoming information stream with the personal behavioral profile and identifies only those items of the incoming information stream that substantially match the personal behavioral profile. The identified personally relevant content is then recommended to the user according to a priority that may consider the similarity of the personal-interest matches and the context of the user's information consumption behavior, as reflected by the user's content consumption mode.
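A hedged sketch of the filtering step described: items from an incoming stream are scored against a personal behavioral profile and only those exceeding a similarity threshold are recommended. The keyword-weight profile representation, the weights, and the threshold are illustrative assumptions, not the patented method.

```python
# Illustrative sketch of profile-based filtering: score incoming items against
# a personal behavioral profile (keyword-weight vectors) and keep only items
# whose similarity exceeds a threshold. Weights and threshold are assumptions.
import math

def cosine(profile, item):
    """Cosine similarity between two sparse keyword-weight vectors."""
    dot = sum(w * item.get(k, 0.0) for k, w in profile.items())
    na = math.sqrt(sum(w * w for w in profile.values()))
    nb = math.sqrt(sum(w * w for w in item.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(profile, incoming_items, threshold=0.3):
    """Keep only items that substantially match the profile, best first."""
    scored = [(cosine(profile, item), name) for name, item in incoming_items.items()]
    return sorted((s, n) for s, n in scored if s >= threshold)[::-1]

profile = {"user_interface": 0.9, "evaluation": 0.4}
items = {"ui_study": {"user_interface": 1.0, "usability": 0.5},
         "sports_news": {"football": 1.0}}
print(recommend(profile, items))
```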
NASA Technical Reports Server (NTRS)
Srivastava, Sadanand; deLamadrid, James
1998-01-01
The User System Interface Agent (USIA) is a special type of software agent which acts as the "middle man" between a human user and an information processing environment. USIA consists of a group of cooperating agents which are responsible for assisting users in obtaining information processing services intuitively and efficiently. Some of the main features of USIA include: (1) multiple interaction modes and (2) user-specific and stereotype modeling and adaptation. This prototype system provides us with a development platform towards the realization of an operational information ecology. In the first phase of this project we focused on the design and implementation of a prototype of the User-System Interface Agent (USIA). The second phase of USIA allows user interaction via a restricted query language as well as through a taxonomy of windows. In the third phase, the USIA system architecture was revised.
Towards a holistic assessment of the user experience with hybrid BCIs.
Lorenz, Romy; Pascual, Javier; Blankertz, Benjamin; Vidaurre, Carmen
2014-06-01
In recent years, brain-computer interfaces (BCIs) have become mature enough to immensely benefit from the expertise and tools established in the field of human-computer interaction (HCI). One of the core objectives in HCI research is the design of systems that provide a pleasurable user experience (UX). While the majority of BCI studies exclusively evaluate common efficiency measures such as classification accuracy and speed, single research groups have begun to look at further usability aspects such as ease of use, workload and learnability. However, these evaluation metrics only cover pragmatic aspects of UX while still not considering the hedonic quality of UX. In order to gain a holistic perspective on UX, hedonic quality aspects such as motivation and frustration were also taken into account for our evaluation of three BCI-driven interfaces, which were proposed to be used as a two-stage neuroprosthetic control within the EU project MUNDUS. At the first stage, one of six possible actions was selected and either confirmed or cancelled at the second stage. For the experiment, a solely event-related-potential-based interface (ERP-ERP) and two hybrid solutions were tested that were controlled by ERP and motor imagery (MI)--resulting in the two possible combinations: ERP selection/MI confirmation (ERP-MI) or MI selection/ERP confirmation (MI-ERP). Behavioural, subjective and encephalographic (EEG) data of 12 healthy subjects were collected during an online experiment with the three graphical user interfaces (GUIs). Results showed a significantly greater pragmatic quality (in terms of accuracy, efficiency, workload, use quality and learnability) for the ERP-ERP and ERP-MI GUIs in contrast to the MI-ERP GUI. Consequently, the MI-ERP GUI is least suited for use as a neuroprosthetic control. With respect to the comparison of the ERP-ERP and ERP-MI GUIs, no significant differences in pragmatic and hedonic quality of UX were found. Since throughout better results were obtained for the conventional approach and it was most preferred by the subjects, the ERP-ERP GUI seems more suitable for its deployment in actual end-users. Nevertheless, for individuals with stable MI patterns, the hybrid interface can be provided as an additional option of choice within the MUNDUS framework. Although the paramount goal in BCI research still remains the improvement of classification accuracy and communication speed, it is of significance to note that it is equally important for end-users to keep up their motivation and prevent frustration. By including pragmatic as well as hedonic quality aspects, this study is the first effort to gain a holistic perspective of the UX while interacting with BCI-driven assistive technology aimed at actual end-users. The broad-scale methodology provided valuable insights into the underlying dynamics causing the users' experience to differ across the GUIs. The results will be used to refine a BCI-driven neuroprosthesis and test it with end-users.
Three-dimensional user interfaces for scientific visualization
NASA Technical Reports Server (NTRS)
VanDam, Andries (Principal Investigator)
1996-01-01
The focus of this grant was to experiment with novel user interfaces for scientific visualization applications using both desktop and virtual reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past three years, and subsumes all prior reports.
The intelligent user interface for NASA's advanced information management systems
NASA Technical Reports Server (NTRS)
Campbell, William J.; Short, Nicholas, Jr.; Rolofs, Larry H.; Wattawa, Scott L.
1987-01-01
NASA has initiated the Intelligent Data Management Project to design and develop advanced information management systems. The project's primary goal is to formulate, design and develop advanced information systems that are capable of supporting the agency's future space research and operational information management needs. The first effort of the project was the development of a prototype Intelligent User Interface to an operational scientific database, using expert systems and natural language processing technologies. An overview of Intelligent User Interface formulation and development is given.