NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1993-01-01
This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.
Human Factors Considerations in System Design
NASA Technical Reports Server (NTRS)
Mitchell, C. M. (Editor); Vanbalen, P. M. (Editor); Moe, K. L. (Editor)
1983-01-01
Human factors considerations in systems design were examined. Human factors in automated command and control, the efficiency of the human-computer interface, and overall system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; and information display and interaction in real-time environments.
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decision-making situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables, including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed, and the practical issues involved in designing human-computer systems for multitask situations are considered.
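The allocation idea in this abstract can be illustrated with a toy discrete-event simulation. This is a sketch under stated assumptions, not Rouse's actual queueing model: the allocation policy (human if free, otherwise the faster but less reliable computer), the service times, and the error probabilities below are all illustrative.

```python
import random

def simulate(n_events=5000, mean_interarrival=1.0, human_service=1.5,
             computer_service=0.3, p_human_error=0.02,
             p_computer_error=0.10, seed=1):
    """Events arrive with exponential interarrival times; the human
    serves an event if free, otherwise the faster but more error-prone
    computer takes it.  Returns (human_share, overall_error_rate)."""
    rng = random.Random(seed)
    t = 0.0
    human_free_at = 0.0
    human_events = errors = 0
    for _ in range(n_events):
        t += rng.expovariate(1.0 / mean_interarrival)
        if t >= human_free_at:          # human available: assign the task
            human_free_at = t + human_service
            human_events += 1
            if rng.random() < p_human_error:
                errors += 1
        else:                           # human busy: computer handles it
            if rng.random() < p_computer_error:
                errors += 1
    return human_events / n_events, errors / n_events
```

Varying the interarrival time and the speed mismatch in such a sketch shows the trade-off the abstract describes: pushing more events to the computer raises throughput but also the expected error rate.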
Modeling Human-Computer Decision Making with Covariance Structure Analysis.
ERIC Educational Resources Information Center
Coovert, Michael D.; And Others
Arguing that sufficient theory exists about the interplay between human information processing, computer systems, and the demands of various tasks to construct useful theories of human-computer interaction, this study presents a structural model of human-computer interaction and reports the results of various statistical analyses of this model.…
Language evolution and human-computer interaction
NASA Technical Reports Server (NTRS)
Grudin, Jonathan; Norman, Donald A.
1991-01-01
Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.
Occupational stress in human computer interaction.
Smith, M J; Conway, F T; Karsh, B T
1999-04-01
There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.
The Promise of Interactive Video: An Affective Search.
ERIC Educational Resources Information Center
Hon, David
1983-01-01
Argues that factors that create a feeling of interactivity in the human situation--response time, spontaneity, lack of distractors--should be included as prime elements in the design of human/machine systems, e.g., computer assisted instruction and interactive video. A computer/videodisc learning system for cardio-pulmonary resuscitation and its…
NASA Technical Reports Server (NTRS)
Clancey, William J.
2003-01-01
A human-centered approach to computer systems design involves reframing analysis in terms of people interacting with each other, not only human-machine interaction. The primary concern is not how people can interact with computers, but how shall we design computers to help people work together? An analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse.
Portable tongue-supported human computer interaction system design and implementation.
Quain, Rohan; Khan, Masood Mehmood
2014-01-01
Tongue supported human-computer interaction (TSHCI) systems can help critically ill patients interact with both computers and people. These systems can be particularly useful for patients suffering injuries above C7 on their spinal vertebrae. Despite recent successes in their application, several limitations restrict performance of existing TSHCI systems and discourage their use in real life situations. This paper proposes a low-cost, less-intrusive, portable and easy to use design for implementing a TSHCI system. Two applications of the proposed system are reported. Design considerations and performance of the proposed system are also presented.
ERIC Educational Resources Information Center
Oren, Michael Anthony
2011-01-01
The juxtaposition of classic sociological theory and the relatively young discipline of human-computer interaction (HCI) serves as a powerful mechanism both for exploring the theoretical impacts of technology on human interactions and for applying technological systems to moderate interactions. It is the intent of this dissertation…
Human-Computer Interaction and Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler)
1995-01-01
The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.
Conceptualizing, Designing, and Investigating Locative Media Use in Urban Space
NASA Astrophysics Data System (ADS)
Diamantaki, Katerina; Rizopoulos, Charalampos; Charitos, Dimitris; Kaimakamis, Nikos
This chapter investigates the social implications of locative media (LM) use and attempts to outline a theoretical framework that may support the design and implementation of location-based applications. Furthermore, it stresses the significance of physical space and location awareness as important factors that influence both human-computer interaction and computer-mediated communication. The chapter documents part of the theoretical aspect of the research undertaken as part of LOcation-based Communication Urban NETwork (LOCUNET), a project that aims to investigate the way users interact with one another (human-computer-human interaction aspect) and with the location-based system itself (human-computer interaction aspect). A number of relevant theoretical approaches are discussed in an attempt to provide a holistic theoretical background for LM use. Additionally, the actual implementation of the LOCUNET system is described and some of the findings are discussed.
Computer Human Interaction for Image Information Systems.
ERIC Educational Resources Information Center
Beard, David Volk
1991-01-01
Presents an approach to developing viable image computer-human interactions (CHI) involving user metaphors for comprehending image data and methods for locating, accessing, and displaying computer images. A medical-image radiology workstation application is used as an example, and feedback and evaluation methods are discussed. (41 references) (LRW)
What Machines Need to Learn to Support Human Problem-Solving
NASA Technical Reports Server (NTRS)
Vera, Alonso
2017-01-01
In the development of intelligent systems that interact with humans, there is often confusion between how the system functions with respect to the humans it interacts with and how it interfaces with those humans. The former is a much deeper challenge than the latter: it requires a system-level understanding of evolving human roles as well as an understanding of what humans need to know (and when) in order to perform their tasks. This talk will focus on some of the challenges in getting this right as well as on the type of research and development that results in successful human-autonomy teaming. Brief Bio: Dr. Alonso Vera is Chief of the Human Systems Integration Division at NASA Ames Research Center. His expertise is in human-computer interaction, information systems, artificial intelligence, and computational human performance modeling. He has led the design, development and deployment of mission software systems across NASA robotic and human space flight missions, including Mars Exploration Rovers, Phoenix Mars Lander, ISS, Constellation, and Exploration Systems. Dr. Vera received a Bachelor of Science with First Class Honors from McGill University in 1985 and a Ph.D. from Cornell University in 1991. He went on to a Post-Doctoral Fellowship in the School of Computer Science at Carnegie Mellon University from 1990-93.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.
1976-01-01
An investigation was made of interaction between a human pilot and automated on-board decision making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of allocation of responsibility between human and computer, and discussed were various pilot performance parameters with varying degrees of automation. Optimal allocation of responsibility between human and computer was considered and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally the design of displays, controls, procedures, and computer aids for problem solving tasks in automated and semi-automated systems was considered.
Cyberpsychology: a human-interaction perspective based on cognitive modeling.
Emond, Bruno; West, Robert L
2003-10-01
This paper argues for the relevance of cognitive modeling and cognitive architectures to cyberpsychology. From a human-computer interaction point of view, cognitive modeling can have benefits both for theory and model building, and for the design and evaluation of socio-technical systems' usability. Cognitive modeling research applied to human-computer interaction has two complementary objectives: (1) to develop theories and computational models of human interactive behavior with information and collaborative technologies, and (2) to use the computational models as building blocks for the design, implementation, and evaluation of interactive technologies. From the perspective of building theories and models, cognitive modeling offers the possibility of anchoring cyberpsychology theories and models in cognitive architectures. From the perspective of the design and evaluation of socio-technical systems, cognitive models can provide the basis for simulated users, which can play an important role in usability testing. As an example of the application of cognitive modeling to technology design, the paper presents a simulation of interactive behavior with five different adaptive menu algorithms: random, fixed, stacked, frequency-based, and activation-based. Results of the simulation indicate that fixed menu positions seem to offer the best support for classification-like tasks such as filing e-mails. This research is part of the Human-Computer Interaction and Broadband Visual Communication research programs at the National Research Council of Canada, in collaboration with the Carleton Cognitive Modeling Lab at Carleton University.
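The menu-adaptation comparison can be sketched with a toy simulation. This is an assumption-laden illustration, not the paper's cognitive-architecture model: cost here is just the 1-based slot of the target item, the usage distribution is an assumed worst case for the fixed layout, and the sketch ignores the visual-search and relearning costs that led the paper to favor fixed positions.

```python
import random
from collections import Counter

def menu_cost(policy, n_items=10, n_trials=2000, seed=7):
    """Mean search cost (1-based slot of the target) under two menu
    policies: 'fixed' keeps the original layout, 'frequency' re-sorts
    the menu by observed selection counts before each trial."""
    rng = random.Random(seed)
    items = list(range(n_items))
    # Assumed worst-case fixed layout: most-used items sit at the bottom.
    weights = [1.0 / (n_items - i) for i in range(n_items)]
    counts = Counter()
    total = 0
    for _ in range(n_trials):
        target = rng.choices(items, weights=weights)[0]
        order = (sorted(items, key=lambda i: -counts[i])
                 if policy == "frequency" else items)
        total += order.index(target) + 1
        counts[target] += 1
    return total / n_trials
```

Because this metric counts only slot distance, frequency ordering wins here; adding a penalty whenever an item changes position would shift the balance back toward fixed menus, consistent with the abstract's finding.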
Eye Tracking Based Control System for Natural Human-Computer Interaction.
Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan
2017-01-01
Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (article search and multimedia web browsing) were performed to compare the proposed eye control tool with an existing system. Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is highly effective with regard to usability and interface design.
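Eye-only mouse replacement in systems of this kind typically rests on dwell selection: a "click" fires when gaze rests in one spot long enough. Below is a minimal sketch of dwell detection; the radius, dwell time, and sampling interval are illustrative assumptions, not the parameters of the system described above.

```python
def dwell_clicks(gaze, radius=30.0, dwell_ms=800, sample_ms=20):
    """Detect dwell 'clicks' in a gaze trace.  A click fires when gaze
    stays within `radius` px of a fixation anchor for `dwell_ms`;
    `gaze` is a list of (x, y) samples taken every `sample_ms` ms."""
    needed = dwell_ms // sample_ms          # samples required to trigger
    clicks, anchor, held = [], None, 0
    for x, y in gaze:
        if (anchor is not None and
                (x - anchor[0]) ** 2 + (y - anchor[1]) ** 2 <= radius ** 2):
            held += 1
            if held == needed:              # fire exactly once per dwell
                clicks.append(anchor)
        else:
            anchor, held = (x, y), 1        # gaze moved: restart the dwell
    return clicks
```

A magnifier module like the one in the abstract addresses the accuracy limit of this primitive: gaze jitter makes small targets hard to dwell on, so the region under the gaze is enlarged before the final selection.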
Making intelligent systems team players: Additional case studies
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Rhoads, Ron W.
1993-01-01
Observations from a case study of intelligent systems are reported as part of a multi-year interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. A series of studies was conducted to investigate issues in designing intelligent fault management systems in aerospace applications for effective human-computer interaction. The results of the initial study are documented in two NASA technical memoranda: TM 104738, Making Intelligent Systems Team Players: Case Studies and Design Issues, Volumes 1 and 2; and TM 104751, Making Intelligent Systems Team Players: Overview for Designers. The objective of this additional study was to broaden the investigation of human-computer interaction design issues beyond the initial study's focus on monitoring and fault detection. The results of this second study are documented here and are intended as a supplement to the original design guidance documents. They should be of interest to designers of intelligent systems for use in real-time operations, and to researchers in the areas of human-computer interaction and artificial intelligence.
ERIC Educational Resources Information Center
Klein, David C.
2014-01-01
As advancements in automation continue to alter the systemic behavior of computer systems in a wide variety of industrial applications, human-machine interactions are increasingly becoming supervisory in nature, with less hands-on human involvement. This maturing of the human role within the human-computer relationship is relegating operations…
Employing Textual and Facial Emotion Recognition to Design an Affective Tutoring System
ERIC Educational Resources Information Center
Lin, Hao-Chiang Koong; Wang, Cheng-Hung; Chao, Ching-Ju; Chien, Ming-Kuan
2012-01-01
Emotional expression in Artificial Intelligence has gained much attention in recent years; affective computing has been applied not only to enhance and enrich the interaction between computers and humans but also to make computers more humane. In this study, emotional expressions were applied to an intelligent tutoring system, where learners'…
Computer modeling and simulation of human movement. Applications in sport and rehabilitation.
Neptune, R R
2000-05-01
Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.
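For a minimal flavor of the forward-dynamics simulation this abstract surveys, consider a single damped pendulum standing in for one limb segment. This is a deliberately tiny sketch: the parameters are arbitrary, and real musculoskeletal models couple many segments, muscles, and neural inputs.

```python
import math

def swing(theta0=1.0, omega0=0.0, g=9.81, length=1.0, damping=0.5,
          dt=0.001, t_end=5.0):
    """Semi-implicit Euler integration of a damped pendulum,
    d2theta/dt2 = -(g/L) sin(theta) - c * dtheta/dt, as a one-segment
    stand-in for forward dynamics of a limb.  Returns the final angle."""
    theta, omega = theta0, omega0
    for _ in range(int(t_end / dt)):
        alpha = -(g / length) * math.sin(theta) - damping * omega
        omega += alpha * dt          # update velocity first (stability)
        theta += omega * dt          # then position, using new velocity
    return theta
```

Even this toy shows the causal structure the article emphasizes: given the equations of motion and an input (here only gravity and damping), the movement outcome follows, so the effect of changing a parameter can be isolated in a way experiments cannot.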
A Framework and Implementation of User Interface and Human-Computer Interaction Instruction
ERIC Educational Resources Information Center
Peslak, Alan
2005-01-01
Researchers have suggested that up to 50% of the effort in development of information systems is devoted to user interface development (Douglas, Tremaine, Leventhal, Wills, & Manaris, 2002; Myers & Rosson, 1992). Yet little study has been performed on the inclusion of important interface and human-computer interaction topics into a current…
Rethinking Visual Analytics for Streaming Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris
In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable.
Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.
Human computer interface guide, revision A
NASA Technical Reports Server (NTRS)
1993-01-01
The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.
1981-02-01
Keywords: battlefield automated systems; human-computer interaction; design criteria; system... Report (this report): In-Depth Analyses of Individual Systems: A. Tactical Fire Direction System (TACFIRE) (RP 81-26); B. Tactical Computer Terminal... The objective is to select the design features and operating procedures of the human-computer interface which best match the requirements and capabilities of anticipated...
Pfeiffer, Ulrich J; Schilbach, Leonhard; Timmermans, Bert; Kuzmanovic, Bojana; Georgescu, Alexandra L; Bente, Gary; Vogeley, Kai
2014-11-01
There is ample evidence that human primates strive for social contact and experience interactions with conspecifics as intrinsically rewarding. Focusing on gaze behavior as a crucial means of human interaction, this study employed a unique combination of neuroimaging, eye-tracking, and computer-animated virtual agents to assess the neural mechanisms underlying this component of behavior. In the interaction task, participants believed that during each interaction the agent's gaze behavior could either be controlled by another participant or by a computer program. Their task was to indicate whether they experienced a given interaction as an interaction with another human participant or the computer program based on the agent's reaction. Unbeknownst to them, the agent was always controlled by a computer to enable a systematic manipulation of gaze reactions by varying the degree to which the agent engaged in joint attention. This allowed creating a tool to distinguish neural activity underlying the subjective experience of being engaged in social and non-social interaction. In contrast to previous research, this allows measuring neural activity while participants experience active engagement in real-time social interactions. Results demonstrate that gaze-based interactions with a perceived human partner are associated with activity in the ventral striatum, a core component of reward-related neurocircuitry. In contrast, interactions with a computer-driven agent activate attention networks. Comparisons of neural activity during interaction with behaviorally naïve and explicitly cooperative partners demonstrate different temporal dynamics of the reward system and indicate that the mere experience of engagement in social interaction is sufficient to recruit this system. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
VanLehn, Kurt
2011-01-01
This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. "No tutoring" refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by their granularity of the user interface interaction into answer-based, step-based, and…
A Perspective on Computational Human Performance Models as Design Tools
NASA Technical Reports Server (NTRS)
Jones, Patricia M.
2010-01-01
The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.
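One classic computational human performance model of the kind this talk reviews is the Keystroke-Level Model (Card, Moran & Newell), which predicts expert task time by summing fixed operator times. The sketch below uses representative textbook values; the keystroke time K in particular varies widely with typing skill, so treat the numbers as assumptions.

```python
# Representative KLM operator times (seconds): K keystroke, P point
# with a mouse, H home hands between devices, M mental preparation.
KLM = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def klm_time(ops):
    """Predicted expert completion time for a sequence of KLM operators."""
    return sum(KLM[op] for op in ops)

# Example: point at a field, home to the keyboard, think, type 5 keys.
task_time = klm_time(["P", "H", "M"] + ["K"] * 5)   # 4.25 s
```

Comparing such operator sequences for two candidate display or control layouts gives the kind of cheap, pre-prototype design evaluation the talk describes, before any empirical testing.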
Interpersonal Biocybernetics: Connecting Through Social Psychophysiology
NASA Technical Reports Server (NTRS)
Pope, Alan T.; Stephens, Chad L.
2012-01-01
One embodiment of biocybernetic adaptation is a human-computer interaction system designed such that physiological signals modulate the effect that control of a task by other means, usually manual control, has on performance of the task. Such a modulation system enables a variety of human-human interactions based upon physiological self-regulation performance. These interpersonal interactions may be mixes of competition and cooperation for simulation training and/or videogame entertainment.
Visual Debugging of Object-Oriented Systems With the Unified Modeling Language
2004-03-01
to be “the systematic and imaginative use of the technology of interactive computer graphics and the disciplines of graphic design , typography ... Graphics volume 23 no 6, pp893-901, 1999. [SHN98] Shneiderman, B. Designing the User Interface. Strategies for Effective Human-Computer Interaction...System Design Objectives ................................................................................ 44 3.3 System Architecture
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.
NASA Astrophysics Data System (ADS)
Zou, Jie; Gattani, Abhishek
2005-01-01
When completely automated systems don't yield acceptable accuracy, many practical pattern recognition systems involve the human either at the beginning (pre-processing) or towards the end (handling rejects). We believe that it may be more useful to involve the human throughout the recognition process rather than just at the beginning or end. We describe a methodology of interactive visual recognition for human-centered low-throughput applications, Computer Assisted Visual InterActive Recognition (CAVIAR), and discuss the prospects of implementing CAVIAR over the Internet. The novelty of CAVIAR is image-based interaction through a domain-specific parameterized geometrical model, which reduces the semantic gap between humans and computers. The user may interact with the computer anytime that she considers its response unsatisfactory. The interaction improves the accuracy of the classification features by improving the fit of the computer-proposed model. The computer makes subsequent use of the parameters of the improved model to refine not only its own statistical model-fitting process, but also its internal classifier. The CAVIAR methodology was applied to implement a flower recognition system. The principal conclusions from the evaluation of the system include: 1) the average recognition time of the CAVIAR system is significantly shorter than that of the unaided human; 2) its accuracy is significantly higher than that of the unaided machine; 3) it can be initialized with as few as one training sample per class and still achieve high accuracy; and 4) it demonstrates a self-learning ability. We have also implemented a Mobile CAVIAR system, where a pocket PC, as a client, connects to a server through wireless communication. The motivation behind a mobile platform for CAVIAR is to apply the methodology in a human-centered pervasive environment, where the user can seamlessly interact with the system for classifying field-data. 
Deploying CAVIAR to a networked mobile platform poses the challenge of classifying field images and programming under constraints of display size, network bandwidth, processor speed, and memory size. Editing of the computer-proposed model is performed on the handheld while statistical model fitting and classification take place on the server. The possibility that the user can easily take several photos of the object poses an interesting information fusion problem. The advantage of the Internet is that the patterns identified by different users can be pooled together to benefit all peer users. When users identify patterns with CAVIAR in a networked setting, they also collect training samples and provide opportunities for machine learning from their intervention. CAVIAR implemented over the Internet provides a perfect test bed for, and extends, the concept of the Open Mind Initiative proposed by David Stork. Our experimental evaluation focuses on human time, machine and human accuracy, and machine learning. We devoted much effort to evaluating the use of our image-based user interface and to developing principles for the evaluation of interactive pattern recognition systems. The Internet architecture and Mobile CAVIAR methodology have many applications. We are exploring directions in teledermatology, face recognition, and education.
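The interaction loop described in the abstract above (the computer proposes a model fit, the user corrects it when unsatisfied, and the corrected model improves both the features and the classifier) can be sketched as follows. This is a minimal illustrative sketch; all function and variable names are hypothetical and not taken from the CAVIAR implementation.

```python
# Minimal sketch of a CAVIAR-style human-in-the-loop recognition cycle.
# All names are illustrative; the real system fits a parameterized
# geometric flower model and uses a statistical classifier.

def fit_model(image):
    """Computer proposes model parameters (e.g., center and extent)."""
    return {"center": (120, 95), "radius": 40}

def extract_features(image, model):
    """Classification features computed from the fitted model region."""
    return [model["radius"], sum(model["center"])]

def classify(features, training_set):
    """Nearest-prototype classifier: one training sample per class suffices."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda c: dist(training_set[c], features))

def recognize(image, training_set, user_correction=None):
    model = fit_model(image)
    if user_correction:          # user adjusts the model when the fit looks wrong
        model.update(user_correction)
    features = extract_features(image, model)
    label = classify(features, training_set)
    return label, model          # corrected models feed later fits (self-learning)

training = {"daisy": [40, 215], "tulip": [25, 180]}
label, model = recognize(None, training)
```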
Kohrs, Christin; Hrabal, David; Angenstein, Nicole; Brechmann, André
2014-11-01
System response time is an important issue in human-computer interaction research. Experience with technical devices and general rules of human-human interaction determine the user's expectation, and any delay in system response time may lead to immediate physiological, emotional, and behavioral consequences. We investigated such effects on a trial-by-trial basis during a human-computer interaction by measuring changes in skin conductance (SC), heart rate (HR), and the dynamics of button press responses. We found an increase in SC and a deceleration of HR for all three delayed system response times (0.5, 1, and 2 s). Moreover, the data on button press dynamics were highly informative, since subjects repeated a button press with more force in response to delayed system response times. Furthermore, the button press dynamics could distinguish between correct and incorrect decisions and may thus even be used to infer the uncertainty of a user's decision. Copyright © 2014 Society for Psychophysiological Research.
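The trial-wise button-press finding lends itself to a small sketch: flag trials in which the user pressed again, harder, after a delayed system response. All thresholds and names below are illustrative assumptions, not values from the study.

```python
# Sketch: flag trials where the user repeated a button press with more
# force following a delayed system response. Thresholds are illustrative.

def repeat_press_detected(presses, delay_s, min_gap_s=0.3, force_ratio=1.2):
    """presses: list of (time_s, force) events within one trial.
    A harder second press after a delayed response suggests the user
    doubted that the first press registered."""
    if delay_s <= 0 or len(presses) < 2:
        return False
    (t1, f1), (t2, f2) = presses[0], presses[1]
    return (t2 - t1) >= min_gap_s and f2 >= force_ratio * f1

trial = [(0.0, 1.0), (0.8, 1.5)]          # second press 50% harder
flagged = repeat_press_detected(trial, delay_s=1.0)
```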
Real-time 3D human capture system for mixed-reality art and entertainment.
Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu
2005-01-01
A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve image quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaborating system, we also describe an application of the system in art and entertainment, named Magic Land, a mixed reality environment where captured avatars of humans and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The results of the user study not only emphasize the benefits, but also address some issues of these technologies.
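The shape-from-silhouette idea underlying the capture system can be illustrated with a toy visual-hull computation: a cell is kept only if its projection falls inside every camera's silhouette. This is a simplified 2-D sketch under invented inputs, not the paper's algorithm, which projects 3-D voxels through nine calibrated cameras.

```python
# Toy 2-D visual hull: carve a grid using silhouettes from two
# orthogonal "cameras" (a side view over rows and a top view over columns).
# A cell survives only if both silhouettes contain its projection.

def visual_hull(row_sil, col_sil):
    return [[1 if row_sil[r] and col_sil[c] else 0
             for c in range(len(col_sil))]
            for r in range(len(row_sil))]

row_sil = [0, 1, 1, 0]   # object spans rows 1-2 as seen from the side
col_sil = [0, 0, 1, 1]   # object spans cols 2-3 as seen from above
hull = visual_hull(row_sil, col_sil)
# the hull is the intersection: cells (1,2), (1,3), (2,2), (2,3)
```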
Real-time multiple human perception with color-depth cameras on a mobile robot.
Zhang, Hao; Reardon, Christopher; Parker, Lynne E
2013-10-01
The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and thereby avoid the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction.
We conclude with the observation that by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
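The plane-removal and candidate-clustering step described above can be sketched in a few lines: drop points near the floor or ceiling by height, then group the survivors by distance. This is a simplified stand-in with invented thresholds, not the paper's point-cloud pipeline.

```python
# Sketch of ground/ceiling removal and candidate clustering on a 3-D
# point cloud. Heights and the cluster radius are illustrative.

def remove_planes(points, floor_z=0.1, ceiling_z=2.5):
    """Keep points whose height lies strictly between floor and ceiling."""
    return [p for p in points if floor_z < p[2] < ceiling_z]

def cluster(points, radius=0.5):
    """Greedy single-linkage clustering by Euclidean distance."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
                   for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

cloud = [(0, 0, 0.05), (1, 1, 1.0), (1.1, 1, 1.1), (4, 4, 1.2), (2, 2, 2.9)]
candidates = cluster(remove_planes(cloud))   # floor/ceiling points dropped
```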
NASA Astrophysics Data System (ADS)
Cheok, Adrian David
This chapter details the Human Pacman system to illuminate entertainment computing, which ventures to embed the natural physical world seamlessly within a fantasy virtual playground by capitalizing on infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile gaming that emphasizes collaboration and competition between players in a wide outdoor physical area that allows natural wide-area human physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer-graphics fantasy-reality provided by the wearable computers on them. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that anchors on physicality, mobility, social interaction, and ubiquitous computing.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Woods, David D.; Potter, Scott S.; Johannesen, Leila; Holloway, Matthew; Forbus, Kenneth D.
1991-01-01
Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real time fault management capabilities. Intelligent fault management systems within the NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real time fault management in aerospace domains; (2) recommendations and examples for improving intelligent systems design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.
The experience of agency in human-computer interactions: a review
Limerick, Hannah; Coyle, David; Moore, James W.
2014-01-01
The sense of agency is the experience of controlling both one’s body and the external environment. Although the sense of agency has been studied extensively, there is a paucity of studies in applied “real-life” situations. One applied domain that seems highly relevant is human-computer-interaction (HCI), as an increasing number of our everyday agentive interactions involve technology. Indeed, HCI has long recognized the feeling of control as a key factor in how people experience interactions with technology. The aim of this review is to summarize and examine the possible links between sense of agency and understanding control in HCI. We explore the overlap between HCI and sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers. An overarching consideration is how agency research can inform HCI and vice versa. Finally, we discuss the potential ethical implications of personal responsibility in an ever-increasing society of technology users and intelligent machine interfaces. PMID:25191256
NASA Astrophysics Data System (ADS)
Clarke, L.
2017-12-01
Integrated assessment (IA) modeling and research has a long history, spanning over 30 years since its inception and addressing a wide range of contemporary issues along the way. Over the last decade, IA modeling and research has emerged as one of the primary analytical methods for understanding the complex interactions between human and natural systems, from the interactions between energy, water, and land/food systems to the interplay between health, climate, and air pollution. IA modeling and research is particularly well-suited for the analysis of these interactions because it is a discipline that strives to integrate representations of multiple systems into consistent computational platforms or frameworks. In doing so, it explicitly confronts the many tradeoffs that are frequently necessary to manage complexity and computational cost while still representing the most important interactions and overall, coupled system behavior. This talk explores the history of IA modeling and research as a means to better understand its role in the assessment of contemporary issues at the confluence of human and natural systems. It traces the evolution of IA modeling and research from initial exploration of long-term emissions pathways, to the role of technology in the global evolution of the energy system, to the key linkages between land and energy systems and, more recently, the linkages with water, air pollution, and other key systems and issues. It discusses the advances in modeling that have emerged over this evolution and the biggest challenges that still present themselves as we strive to better understand the most important interactions between human and natural systems and the implications of these interactions for human welfare and decision making.
Applications of airborne ultrasound in human-computer interaction.
Dahl, Tobias; Ealo, Joao L; Bang, Hans J; Holm, Sverre; Khuri-Yakub, Pierre
2014-09-01
Airborne ultrasound is a rapidly developing subfield within human-computer interaction (HCI). Touchless ultrasonic interfaces and pen tracking systems are part of recent trends in HCI and are gaining industry momentum. This paper aims to provide the background and overview necessary to understand the capabilities of ultrasound and its potential future in human-computer interaction. The latest developments on the ultrasound transducer side are presented, focusing on capacitive micro-machined ultrasonic transducers, or CMUTs. Their introduction is an important step toward providing real, low-cost multi-sensor array and beam-forming options. We also provide a unified mathematical framework for understanding and analyzing algorithms used for ultrasound detection and tracking for some of the most relevant applications. Copyright © 2014. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Schulte, Erin
2017-01-01
As augmented and virtual reality grow in popularity, and more researchers focus on their development, other fields of technology have grown in the hopes of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) system utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams, and other similar hardware, has shown potential in assisting with the development of an HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development, and other similar areas.
Parkinson Patients' Initial Trust in Avatars: Theory and Evidence.
Javor, Andrija; Ransmayr, Gerhard; Struhal, Walter; Riedl, René
2016-01-01
Parkinson's disease (PD) is a neurodegenerative disease that affects the motor system and cognitive and behavioral functions. Due to these impairments, PD patients also have problems in using the computer. However, using computers and the Internet could help these patients to overcome social isolation and enhance information search. Specifically, avatars (defined as virtual representations of humans) are increasingly used in online environments to enhance human-computer interaction by simulating face-to-face interaction. Our laboratory experiment investigated how PD patients behave in a trust game played with human and avatar counterparts, and we compared this behavior to the behavior of age, income, education and gender matched healthy controls. The results of our study show that PD patients trust avatar faces significantly more than human faces. Moreover, there was no significant difference between initial trust of PD patients and healthy controls in avatar faces, while PD patients trusted human faces significantly less than healthy controls. Our data suggests that PD patients' interaction with avatars may constitute an effective way of communication in situations in which trust is required (e.g., a physician recommends intake of medication). We discuss the implications of these results for several areas of human-computer interaction and neurological research.
ERIC Educational Resources Information Center
Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.
2016-01-01
A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…
Human-computer interaction: psychological aspects of the human use of computing.
Olson, Gary M; Olson, Judith S
2003-01-01
Human-computer interaction (HCI) is a multidisciplinary field in which psychology and other social sciences unite with computer science and related technical fields with the goal of making computing systems that are both useful and usable. It is a blend of applied and basic research, both drawing from psychological research and contributing new ideas to it. New technologies continuously challenge HCI researchers with new options, as do the demands of new audiences and uses. A variety of usability methods have been developed that draw upon psychological principles. HCI research has expanded beyond its roots in the cognitive processes of individual users to include social and organizational processes involved in computer usage in real environments as well as the use of computers in collaboration. HCI researchers need to be mindful of the longer-term changes brought about by the use of computing in a variety of venues.
Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90)
NASA Technical Reports Server (NTRS)
Savely, Robert T. (Editor)
1991-01-01
The papers from the symposium are presented. Emphasis is placed on human factors engineering and space environment interactions. The technical areas covered in the human factors section include: satellite monitoring and control, man-computer interfaces, expert systems, AI/robotics interfaces, crew system dynamics, and display devices. The space environment interactions section presents the following topics: space plasma interaction, spacecraft contamination, space debris, and atomic oxygen interaction with materials. Some of the above topics are discussed in relation to the space station and space shuttle.
Investigations in Computer-Aided Instruction and Computer-Aided Controls. Final Report.
ERIC Educational Resources Information Center
Rosenberg, R.C.; And Others
These research projects, designed to delve into certain relationships between humans and computers, are focused on computer-assisted instruction and on man-computer interaction. One study demonstrates that within the limits of formal engineering theory, a computer simulated laboratory (Dynamic Systems Laboratory) can be built in which freshmen…
Computer-generated forces in distributed interactive simulation
NASA Astrophysics Data System (ADS)
Petty, Mikel D.
1995-04-01
Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.
Human-Computer Interaction, Tourism and Cultural Heritage
NASA Astrophysics Data System (ADS)
Cipolla Ficarra, Francisco V.
We present a state of the art of human-computer interaction aimed at tourism and cultural heritage in some cities of the European Mediterranean. The work analyzes the main problems deriving from training treated as a business, problems which can derail the continued growth of HCI, the new technologies, and the tourism industry. Through a semiotic and epistemological study, we detect current mistakes in the interrelations of the formal and factual sciences, as well as the human factors that influence the professionals devoted to the development of interactive systems for safeguarding and boosting cultural heritage.
Getting seamless care right from the beginning - integrating computers into the human interaction.
Pearce, Christopher; Kumarpeli, Pushpa; de Lusignan, Simon
2010-01-01
The digital age is coming to the health space, behind many other fields of society, partly because health remains heavily reliant on human interaction. The doctor-patient relationship remains a significant factor in determining patient outcomes. Whilst there are many benefits to E-Health, there are also significant risks if computers are not adequately integrated into this interaction and accurate data are consequently not available on the patient's journey through the health system. We performed video analysis of routine clinical consultations in Australian and UK primary care, examining 308 consultations (141 and 167, respectively) with an emphasis on how the consultation starts. Australian consultations have a mean duration of 12.7 minutes, UK consultations 11.8 minutes. In both countries around 7% of consultations are computer initiated. Where doctors engaged with computer use, the patient observed the computer screen much more and better records were produced. However, there was suboptimal engagement, with poor records and no coding, in around 20% of consultations. How the computer is used at the start of the consultation can set the scene for an effective interaction, or reflect disengagement from technology and the creation of poor records.
40 CFR 86.010-2 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... diagnostics, means verifying that a component and/or system that receives information from a control computer... maintained. In general, limp-home operation implies that a component or system is not operating properly or... cannot be erased through human interaction with the OBD system or any onboard computer. Potential...
Computer aided systems human engineering: A hypermedia tool
NASA Technical Reports Server (NTRS)
Boff, Kenneth R.; Monk, Donald L.; Cody, William J.
1992-01-01
The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.
People and computers--some recent highlights.
Shackel, B
2000-12-01
This paper aims to review selectively a fair proportion of the literature on human-computer interaction (HCI) over the three years since Shackel (J. Am. Soc. Inform. Sci. 48 (11) (1997) 970-986). After a brief note of history I discuss traditional input, output and workplace aspects, the web and 'E-topics', web-related aspects, virtual reality, safety-critical systems, and the need to move from HCI to human-system integration (HSI). Finally I suggest, and consider briefly, some future possibilities and issues including web consequences, embedded ubiquitous computing, and 'back to systems ergonomics?'.
Trends in Human-Computer Interaction to Support Future Intelligence Analysis Capabilities
2011-06-01
that allows data to be moved between different computing systems and displays (Figure 4: G-Speak gesture interaction; Oblong, 2011). 5.2 Multitouch: Multitouch refers to a touchscreen interaction technique in which multiple simultaneous touchpoints and movements can be detected and used to … much of the style of interaction (such as rotate, pinch, zoom, and flick movements) found in multitouch devices but can typically recognize more than …
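The pinch/zoom gesture mentioned in the excerpt above reduces to tracking the distance between two simultaneous touchpoints; the zoom factor is the ratio of the current finger separation to the initial one. A minimal sketch with invented coordinates (not from the report):

```python
# Sketch of pinch-zoom recognition from two simultaneous touchpoints:
# the zoom factor is the ratio of current to initial finger separation.

import math

def separation(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_pts, current_pts):
    """start_pts/current_pts: ((x1, y1), (x2, y2)) for the two fingers."""
    return separation(*current_pts) / separation(*start_pts)

scale = pinch_scale(((0, 0), (100, 0)), ((0, 0), (150, 0)))  # fingers spread
# scale > 1 means zoom in; scale < 1 means a pinch (zoom out)
```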
Definition Of Touch-Sensitive Zones For Graphical Displays
NASA Technical Reports Server (NTRS)
Monroe, Burt L., III; Jones, Denise R.
1988-01-01
Touch zones are defined simply by touching, while editing is done automatically. Development of a touch-screen interactive computing system is a tedious task. The Interactive Editor for Definition of Touch-Sensitive Zones computer program increases the efficiency of human/machine communications by enabling the user to define each zone interactively, minimizing redundancy in programming and eliminating the need for manual computation of the boundaries of touch areas. Information produced during the editing process is written to a data file, to which an application program gains access when needed.
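The touch-zone idea (interactively defined rectangles that an application later hit-tests against touch coordinates) can be sketched as follows; the zone names and storage format are hypothetical, not the program's actual data-file layout.

```python
# Sketch of touch-sensitive-zone definition and lookup: zones are stored
# as named rectangles, and the application maps a touch point to a zone.

def define_zone(zones, name, x0, y0, x1, y1):
    """Record a zone from two touched corner points, in any order."""
    zones[name] = (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))

def hit_test(zones, x, y):
    """Return the name of the first zone containing the touch, if any."""
    for name, (ax, ay, bx, by) in zones.items():
        if ax <= x <= bx and ay <= y <= by:
            return name
    return None

zones = {}
define_zone(zones, "ENGINE_START", 10, 10, 60, 40)
define_zone(zones, "ABORT", 70, 10, 120, 40)
pressed = hit_test(zones, 80, 25)   # falls inside the ABORT rectangle
```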
ERIC Educational Resources Information Center
Weller, Herman G.; Hartson, H. Rex
1992-01-01
Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…
Man Machine Systems in Education.
ERIC Educational Resources Information Center
Sall, Malkit S.
This review of the research literature on the interaction between humans and computers discusses how man machine systems can be utilized effectively in the learning-teaching process, especially in secondary education. Beginning with a definition of man machine systems and comments on the poor quality of much of the computer-based learning material…
NASA Astrophysics Data System (ADS)
Obermayer, Richard W.; Nugent, William A.
2000-11-01
The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, and stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human-computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designer's guide are presented.
Evolving technologies for Space Station Freedom computer-based workstations
NASA Technical Reports Server (NTRS)
Jensen, Dean G.; Rudisill, Marianne
1990-01-01
Viewgraphs on evolving technologies for Space Station Freedom computer-based workstations are presented. The human-computer software environment modules are described. The following topics are addressed: command and control workstation concept; cupola workstation concept; Japanese experiment module RMS workstation concept; remote devices controlled from workstations; orbital maneuvering vehicle free flyer; remote manipulator system; Japanese experiment module exposed facility; Japanese experiment module small fine arm; flight telerobotic servicer; human-computer interaction; and workstation/robotics related activities.
Learning by Communicating in Natural Language with Conversational Agents
ERIC Educational Resources Information Center
Graesser, Arthur; Li, Haiying; Forsyth, Carol
2014-01-01
Learning is facilitated by conversational interactions both with human tutors and with computer agents that simulate human tutoring and ideal pedagogical strategies. In this article, we describe some intelligent tutoring systems (e.g., AutoTutor) in which agents interact with students in natural language while being sensitive to their cognitive…
A Kinect-Based Assessment System for Smart Classroom
ERIC Educational Resources Information Center
Kumara, W. G. C. W.; Wattanachote, Kanoksak; Battulga, Batbaatar; Shih, Timothy K.; Hwang, Wu-Yuin
2015-01-01
With the advancements of the human computer interaction field, nowadays it is possible for the users to use their body motions, such as swiping, pushing and moving, to interact with the content of computers or smart phones without traditional input devices like mouse and keyboard. With the introduction of gesture-based interface Kinect from…
Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments
Víctor Rodrigo, Mercado-García
2017-01-01
Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities. PMID:29317861
Emerging Computer Media: On Image Interaction
NASA Astrophysics Data System (ADS)
Lippman, Andrew B.
1982-01-01
Emerging technologies such as inexpensive, powerful local computing, optical digital videodiscs, and the technologies of human-machine interaction are initiating a revolution in both image storage systems and image interaction systems. This paper will present a review of new approaches to computer media predicated upon three-dimensional position sensing, speech recognition, and high-density image storage. Examples will be shown, such as the Spatial Data Management System, wherein the free use of place results in intuitively clear retrieval systems and potentials for image association; the Movie-Map, wherein inherently static media generate dynamic views of data; and conferencing work-in-progress wherein joint processing is stressed. Application to medical imaging will be suggested, but the primary emphasis is on the general direction of imaging and reference systems. We are passing the age of mere possibility in computer graphics and image processing and entering the age of ready usability.
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien
2012-09-01
This study develops a body-motion interactive system using computer vision technology. The application combines interactive games, art performance, and exercise training. Multiple image processing and computer vision technologies are used. The system calculates the color characteristics of an object and then performs color segmentation. When an action judgment is wrong, the system avoids the error with a weight-voting mechanism: a condition score and weight value are set for each action judgment, and the best judgment is chosen by weighted vote. Finally, the reliability of the system was estimated in order to make improvements. The results showed that this method achieves good accuracy and stability in operating the human-machine interface of the sports training system.
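The weight-voting idea described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: each detector proposes an action judgment with a condition score and a weight, and the judgment with the highest weighted total wins, so a single erroneous detector is outvoted.

```python
from collections import defaultdict

def weighted_vote(judgments):
    """Pick the best action judgment from (action, condition_score, weight) votes."""
    totals = defaultdict(float)
    for action, score, weight in judgments:
        totals[action] += score * weight  # accumulate weighted condition scores
    return max(totals, key=totals.get)

# Two detectors say "jump", one says "duck"; the weighted totals decide.
best = weighted_vote([("jump", 0.9, 0.5), ("duck", 0.6, 1.0), ("jump", 0.7, 0.8)])
```

Here the two "jump" votes total 0.9*0.5 + 0.7*0.8 = 1.01 against 0.60 for "duck"; the score and weight values are invented for illustration.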
Human Machine Interfaces for Teleoperators and Virtual Environments Conference
NASA Technical Reports Server (NTRS)
1990-01-01
In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human-machine interface is retained, but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system the purpose is to train, inform, alter, or study the human operator, or to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they have had little impact outside aviation, presumably because the application was so specialized and so expensive.
Is Human-Computer Interaction Social or Parasocial?
ERIC Educational Resources Information Center
Sundar, S. Shyam
Conducted in the attribution-research paradigm of social psychology, a study examined whether human-computer interaction is fundamentally social (as in human-human interaction) or parasocial (as in human-television interaction). All 30 subjects (drawn from an undergraduate class on communication) were exposed to an identical interaction with…
Automated social skills training with audiovisual information.
Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi
2016-08-01
People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method for acquiring appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers also take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features (smiling ratio, yaw, and pitch). An experimental evaluation measures the difference in effectiveness of social skills training when using audio features versus audiovisual features. Results showed that the visual features were effective in improving users' social skills.
A Framework for Modeling Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.
Why Adolescents Use a Computer-Based Health Information System.
ERIC Educational Resources Information Center
Hawkins, Robert P.; And Others
The Body Awareness Resource Network (BARN) is a system of interactive computer programs designed to provide adolescents with confidential, nonjudgmental health information, behavior change strategies, and sources of referral. These programs cover five adolescent health areas: alcohol and other drugs, human sexuality, smoking prevention and…
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.
The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters.
Rempel, David; Camilleri, Matt J; Lee, David L
2015-10-01
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.
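The kind of model reported above can be illustrated with a small sketch. The data, features, and outcomes below are invented for demonstration (the study's actual ratings are not reproduced here): a logistic model relates binary posture features (flexed wrist, discordant adjacent fingers, extended fingers) to a high-discomfort outcome, fitted with plain gradient descent.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic regression by batch gradient descent on mean log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(high discomfort)
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on the weights
        b -= lr * np.mean(p - y)                # gradient step on the intercept
    return w, b

# Columns: flexed wrist, discordant adjacent fingers, extended fingers (0/1).
X = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 0, 0],
              [1, 0, 1], [0, 1, 0], [0, 0, 0], [1, 1, 1]], dtype=float)
y = np.array([1, 1, 1, 0, 1, 1, 0, 1], dtype=float)  # 1 = high discomfort
w, b = fit_logistic(X, y)
```

With this toy data every posture feature is associated with discomfort, so all three fitted coefficients come out positive, mirroring the direction of the associations the study reports.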
Human/computer control of undersea teleoperators
NASA Technical Reports Server (NTRS)
Sheridan, T. B.; Verplank, W. L.; Brooks, T. L.
1978-01-01
The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.
1991-07-01
authoring systems. Concurrently, great strides in computer-aided design and computer-aided maintenance have contributed to this capability. 12 Junod, J.; William A. Nugent; and L. John Junod. Plan for the Navy/Air Force Test of the Interactive Electronic Technical Manual (IETM) at Cecil Field... AFHRL Logistics and Human Factors Division, WPAFB. Aug 1990. 12. Junod, John L. PY90 Interactive Electronic Technical Manual (IETM) Portable Delivery
A Dynamic Dialog System Using Semantic Web Technologies
ERIC Educational Resources Information Center
Ababneh, Mohammad
2014-01-01
A dialog system or a conversational agent provides a means for a human to interact with a computer system. Dialog systems use text, voice and other means to carry out conversations with humans in order to achieve some objective. Most dialog systems are created with specific objectives in mind and consist of preprogrammed conversations. The primary…
On the Rhetorical Contract in Human-Computer Interaction.
ERIC Educational Resources Information Center
Wenger, Michael J.
1991-01-01
An exploration of the rhetorical contract--i.e., the expectations for appropriate interaction--as it develops in human-computer interaction revealed that direct manipulation interfaces were more likely to establish social expectations. Study results suggest that the social nature of human-computer interactions can be examined with reference to the…
Recent technology products from Space Human Factors research
NASA Technical Reports Server (NTRS)
Jenkins, James P.
1991-01-01
The goals of the NASA Space Human Factors program and the research carried out concerning human factors are discussed, with emphasis given to the development of human performance models, data, and tools. The major products from this program are described, which include the Laser Anthropometric Mapping System; a model of the human body for evaluating the kinematics and dynamics of human motion and strength in a microgravity environment; an operational experience database for verifying and validating the data repository of manned space flights; the Operational Experience Database Taxonomy; and a human-computer interaction laboratory whose products are the display software and requirements and the guideline documents and standards for applications on human-computer interaction. Special attention is given to the 'Convolvotron', a prototype version of a signal processor for synthesizing head-related transfer functions.
Human-Computer Interaction and Information Management Research Needs
2003-10-01
hand-held personal digital assistants, networked sensors and actuators, and low-power computers on satellites. ...most complex tools that humans have... calculations using data on external media such as tapes evolved into our multi-functional 21st century systems. More ideas came as networks of computing
Overview Electrotactile Feedback for Enhancing Human Computer Interface
NASA Astrophysics Data System (ADS)
Pamungkas, Daniel S.; Caesarendra, Wahyu
2018-04-01
To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback is increasingly being utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user's interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear, and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, a description of the sensory receptors within the skin for sensing tactile stimuli and electric currents, as well as several factors which influence the transmission of electric signals to the brain via human skin, is given.
1983-08-01
AD-A136 99... The Integrated Mission-Planning Station: Functional Requirements, Aviator-... (U) Anacapa Sciences Inc., Santa Barbara, CA; S. P. Rogers. Aug... Keywords (continue on reverse side if necessary and identify by block number): Interactive Systems; Aviation; Control-Display; Functional Requirements; Plan-Computer Dialogue; Avionics Systems; Map Display; Army Aviation; Design Criteria; Helicopters; Mission Planning; Cartography; Digital Map; Human Factors; Navigation
The Voice as Computer Interface: A Look at Tomorrow's Technologies.
ERIC Educational Resources Information Center
Lange, Holley R.
1991-01-01
Discussion of voice as the communications device for computer-human interaction focuses on voice recognition systems for use within a library environment. Voice technologies are described, including voice response and voice recognition; examples of voice systems in use in libraries are examined; and further possibilities, including use with…
Analyzing Robotic Kinematics Via Computed Simulations
NASA Technical Reports Server (NTRS)
Carnahan, Timothy M.
1992-01-01
Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.
Managing Computer Systems Development: Understanding the Human and Technological Imperatives.
1985-06-01
for their organization's use? How can they predict the impact of future systems on their management control capabilities? Of equal importance is the... commercial organizations discovered that there was only a limited capability of interaction between various types of computers. These organizations were... Viewed together, these three interrelated subsystems, EDP, MIS, and DSS, establish the framework of an overall systems capability known as a Computer
NASA Technical Reports Server (NTRS)
Johnson, David W.
1992-01-01
Virtual realities are a type of human-computer interface (HCI) and as such may be understood from a historical perspective. In the earliest era, the computer was a very simple, straightforward machine. Interaction was human manipulation of an inanimate object, little more than the provision of an explicit instruction set to be carried out without deviation. In short, control resided with the user. In the second era of HCI, some level of intelligence and control was imparted to the system to enable a dialogue with the user. Simple context sensitive help systems are early examples, while more sophisticated expert system designs typify this era. Control was shared more equally. In this, the third era of the HCI, the constructed system emulates a particular environment, constructed with rules and knowledge about 'reality'. Control is, in part, outside the realm of the human-computer dialogue. Virtual reality systems are discussed.
Design for interaction between humans and intelligent systems during real-time fault management
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.
1992-01-01
Initial results are reported to provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real-time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective of the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information-level requirements analysis of HCI, (2) high-level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.
2017-11-13
behavior. The International Journal of Human-Computer Studies, 108, 105-121. https://doi.org/10.1016/j.ijhcs.2017.06.006 A second journal article... documenting the erroneous behavior generation approach and the case study analyses is currently being written. Planned submission is Spring 2017. RPPR... Belvoir, 2010. [3] A task-based taxonomy of erroneous human behavior. International Journal of Human-Computer Studies, 108:105-121, 2017. [4] M. L
ERIC Educational Resources Information Center
Lee, Eun-Ju; Nass, Clifford
2002-01-01
Presents two experiments to address the questions of if and how normative social influence operates in anonymous computer-mediated communication and human-computer interaction. Finds that the perception of interaction partner (human vs. computer) moderated the group conformity effect such that the undergraduate student subjects expressed greater…
Intelligent Context-Aware and Adaptive Interface for Mobile LBS
Liu, Yanhong
2015-01-01
Context-aware user interfaces play an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment where location based services users are impeded by device limitations. Better context-aware human-computer interaction models of mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but to understand the human processes at work in spatial query, which will in turn inform the detailed design of better user interfaces in mobile location based services. In this study, a context-aware adaptive model for mobile location based services interfaces is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model handles users' demands in a complicated environment, and experimental results suggest its feasibility. PMID:26457077
2011-01-01
Background Due to the increasing functionality of medical information systems, it is hard to imagine day-to-day work in hospitals without IT support. Therefore, the design of dialogues between humans and information systems is one of the most important issues to be addressed in health care. This survey presents an analysis of the current quality level of human-computer interaction of healthcare IT in German hospitals, focused on the users' point of view. Methods To evaluate the usability of clinical IT according to the design principles of EN ISO 9241-10, the IsoMetrics Inventory, an assessment tool, was used. The focus of this paper has been put on suitability for task, training effort and conformity with user expectations, differentiated by information systems. Effectiveness has been evaluated with the focus on interoperability and functionality of different IT systems. Results 4521 persons from 371 hospitals visited the start page of the study, while 1003 persons from 158 hospitals completed the questionnaire. The results show relevant variations between different information systems. Conclusions Specialised information systems with defined functionality received better assessments than clinical information systems in general. This could be attributed to the improved customisation of these specialised systems for specific working environments. The results can be used as reference data for evaluation and benchmarking of human-computer engineering in the clinical health IT context in future studies. PMID:22070880
Norm-Aware Socio-Technical Systems
NASA Astrophysics Data System (ADS)
Savarimuthu, Bastin Tony Roy; Ghose, Aditya
The following sections are included: * Introduction * The Need for Norm-Aware Systems * Norms in human societies * Why should software systems be norm-aware? * Case Studies of Norm-Aware Socio-Technical Systems * Human-computer interactions * Virtual environments and multi-player online games * Extracting norms from big data and software repositories * Norms and Sustainability * Sustainability and green ICT * Norm awareness through software systems * Where To, From Here? * Conclusions
Developing the human-computer interface for Space Station Freedom
NASA Technical Reports Server (NTRS)
Holden, Kritina L.
1991-01-01
For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.
Eye-movements and Voice as Interface Modalities to Computer Systems
NASA Astrophysics Data System (ADS)
Farid, Mohsen M.; Murtagh, Fionn D.
2003-03-01
We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.
Portable computing - A fielded interactive scientific application in a small off-the-shelf package
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Hazelton, Lyman; Frainier, Rich; Compton, Michael; Colombano, Silvano; Szolovits, Peter
1993-01-01
Experience with the design and implementation of a portable computing system for STS crew-conducted science is discussed. Principal-Investigator-in-a-Box (PI) will help the SLS-2 astronauts perform vestibular (human orientation system) experiments in flight. PI is an interactive system that provides data acquisition and analysis, experiment step rescheduling, and various other forms of reasoning to astronaut users. The hardware architecture of PI consists of a computer and an analog interface box. 'Off-the-shelf' equipment is employed in the system wherever possible in an effort to use widely available tools and then to add custom functionality and application codes to them. Other projects which can help prospective teams to learn more about portable computing in space are also discussed.
Enhancing Learning through Human Computer Interaction
ERIC Educational Resources Information Center
McKay, Elspeth, Ed.
2007-01-01
Enhancing Learning Through Human Computer Interaction is an excellent reference source for human computer interaction (HCI) applications and designs. This "Premier Reference Source" provides a complete analysis of online business training programs and e-learning in the higher education sector. It describes a range of positive outcomes for linking…
Implicit prosody mining based on the human eye image capture technology
NASA Astrophysics Data System (ADS)
Gao, Pei-pei; Liu, Feng
2013-08-01
Eye-tracker technology has become one of the main methods of analyzing recognition issues in human-computer interaction. Human eye image capture is the key problem of eye tracking. Based on further research, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture technology: parameters are extracted from images of human eyes during reading to control and drive prosody generation in speech synthesis, and a prosodic model with high simulation accuracy is established. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea for obtaining the gaze duration of the eyes during reading based on eye image capture technology, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. The movement of human eyes during reading is a comprehensive multi-factor interactive process involving gaze, twitching, and backsight. Therefore, how to extract the appropriate information from images of human eyes needs to be considered, and the gaze regularity of the eyes needs to be obtained as a reference for modeling. Based on an analysis of three current eye movement control models and the characteristics of Implicit Prosody reading, the relative independence between the text speech processing system and the eye movement control system is discussed. It was shown that, under the same text familiarity condition, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on Chinese-language-level prosodic structure is presented, replacing previous methods of machine learning and probability forecasting, to obtain readers' real internal reading rhythm and to synthesize voice with personalized rhythm.
This research will enrich human-computer interactive forms, and it has practical significance and application prospects for disability-assisted speech interaction. Experiments show that Implicit Prosody mining based on human eye image capture technology gives the synthesized speech more flexible expression.
Facial expression system on video using widrow hoff
NASA Astrophysics Data System (ADS)
Jannah, M.; Zarlis, M.; Mawengkang, H.
2018-03-01
Facial expression recognition is an interesting research area. This research relates human feelings to computer applications such as human-computer interaction, data compression, facial animation, and facial detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method for training and testing images with the Adaptive Linear Neuron (ADALINE) approach. The system performance is evaluated by two parameters: detection rate and false positive rate. The system accuracy depends on good technique and on the position of the faces that are trained and tested.
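The Widrow-Hoff (LMS) rule named in this abstract is the classic delta rule for training an adaptive linear neuron. The sketch below is a minimal illustration on toy two-dimensional data, not the paper's image pipeline: each sample nudges the weights in proportion to the error between the target and the linear output.

```python
import numpy as np

def widrow_hoff_train(X, y, lr=0.01, epochs=100):
    """Train an ADALINE unit with the per-sample Widrow-Hoff (LMS) delta rule."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            err = ti - (xi @ w + b)   # error of the linear (pre-threshold) output
            w += lr * err * xi        # w <- w + lr * (t - net) * x
            b += lr * err
    return w, b

# Toy data with +1/-1 targets, standing in for image-derived feature vectors.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = widrow_hoff_train(X, y)
preds = np.sign(X @ w + b)            # classification uses the sign of the output
```

Unlike the perceptron, the weight update uses the raw linear output rather than the thresholded class, so training minimizes squared error even on already-correct samples.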
Information Interaction: Providing a Framework for Information Architecture.
ERIC Educational Resources Information Center
Toms, Elaine G.
2002-01-01
Discussion of information architecture focuses on a model of information interaction that bridges the gap between human and computer and between information behavior and information retrieval. Illustrates how the process of information interaction is affected by the user, the system, and the content. (Contains 93 references.) (LRW)
NASA Astrophysics Data System (ADS)
Morse, P. E.; Reading, A. M.; Lueg, C.
2014-12-01
Pattern-recognition in scientific data is not only a computational problem but a human-observer problem as well. Human observation of - and interaction with - data visualization software can augment, select, interrupt and modify computational routines and facilitate processes of pattern and significant feature recognition for subsequent human analysis, machine learning, expert and artificial intelligence systems.'Tagger' is a Mac OS X interactive data visualisation tool that facilitates Human-Computer interaction for the recognition of patterns and significant structures. It is a graphical application developed using the Quartz Composer framework. 'Tagger' follows a Model-View-Controller (MVC) software architecture: the application problem domain (the model) is to facilitate novel ways of abstractly representing data to a human interlocutor, presenting these via different viewer modalities (e.g. chart representations, particle systems, parametric geometry) to the user (View) and enabling interaction with the data (Controller) via a variety of Human Interface Devices (HID). The software enables the user to create an arbitrary array of tags that may be appended to the visualised data, which are then saved into output files as forms of semantic metadata. Three fundamental problems that are not strongly supported by conventional scientific visualisation software are addressed:1] How to visually animate data over time, 2] How to rapidly deploy unconventional parametrically driven data visualisations, 3] How to construct and explore novel interaction models that capture the activity of the end-user as semantic metadata that can be used to computationally enhance subsequent interrogation. Saved tagged data files may be loaded into Tagger, so that tags may be tagged, if desired. 
Recursion opens up the possibility of refining or overlapping different types of tags, tagging a variety of different POIs or types of events, and of capturing different types of specialist observations of important or noticeable events. Other visualisations and modes of interaction will also be demonstrated, with the aim of discovering knowledge in large datasets in the natural, physical sciences. (Fig. 1: wave height data from an oceanographic Wave Rider Buoy; colors and radii are driven by wave height.)
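The recursive tagging described above (tags saved as semantic metadata, with tags that can themselves be tagged) can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are hypothetical and are not Tagger's actual API.

```python
# Hypothetical sketch of Tagger's recursive tagging idea: a tag annotates a
# data point (or another tag), and saved tags can be tagged in turn.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tag:
    label: str          # e.g. "rogue-wave", "sensor-glitch" (made-up labels)
    target_index: int   # index of the data point being annotated
    tags: List["Tag"] = field(default_factory=list)  # tags on this tag

    def tag(self, label: str) -> "Tag":
        """Append a higher-order tag (tagging a tag) and return it."""
        child = Tag(label, self.target_index)
        self.tags.append(child)
        return child

    def depth(self) -> int:
        """Nesting depth of the annotation chain (1 = first-order tag)."""
        return 1 + max((t.depth() for t in self.tags), default=0)

# Usage: a wave-height sample flagged as a point of interest, then refined
# by a second- and third-order observation.
poi = Tag("point-of-interest", target_index=42)
poi.tag("rogue-wave").tag("verified-by-analyst")
```

Recursion of this kind is what lets overlapping or refining observations accumulate on the same point of interest.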
Real Time Eye Tracking and Hand Tracking Using Regular Video Cameras for Human Computer Interaction
2011-01-01
...understand us. More specifically, the computer should be able to infer what we wish to see, do, and interact with through our movements, gestures, and...in depth freedom. Our system differs from the majority of other systems in that we do not use infrared, stereo-cameras, specially-constructed...
2014-07-08
A brain-computer interaction (BCI) system allows human subjects to communicate with or control an external device with their brain signals [1], or to use those brain...signals to interact with computers, environments, or even other humans [2]. One application of BCI is to use brain signals to distinguish target...images within a large collection of non-target images [2]. Such BCI-based systems can drastically increase the speed of target identification in...
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple-inputs-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color value of the input image and the Gabor wavelet coefficient of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational consumption of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
Hands in space: gesture interaction with augmented-reality interfaces.
Billinghurst, Mark; Piumsomboon, Tham; Huidong Bai
2014-01-01
Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.
Understanding and Managing Causality of Change in Socio-Technical Systems II
2011-01-25
SUBJECT TERMS: Cognition, Human Effectiveness, Information Science...at large, taking into account the cognitive interaction between humans and technology. Hussein Abbass: Professor Abbass leads the...Network Centric Operations, Future Air Traffic Management Systems, and Cognitive Engineering including Human-Computer Integration. In all of the...
2016-05-01
...research, Kunkler (2006) suggested that the similarities between computer simulation tools and robotic surgery systems (e.g., mechanized feedback...Davies B. A review of robotics in surgery. Proceedings of the Institution of Mechanical Engineers, Part H: Journal...ARL-TR-7683, May 2016, US Army Research Laboratory: A Guide for Developing Human-Robot Interaction Experiments in the Robotic...
Wilcox, Lauren; Patel, Rupa; Chen, Yunan; Shachak, Aviv
2013-12-01
Health Information Technologies, such as electronic health records (EHR) and secure messaging, have already transformed interactions among patients and clinicians. In addition, technologies supporting asynchronous communication outside of clinical encounters, such as email, SMS, and patient portals, are being increasingly used for follow-up, education, and data reporting. Meanwhile, patients are increasingly adopting personal tools to track various aspects of health status and therapeutic progress, wishing to review these data with clinicians during consultations. These issues have drawn increasing interest from the human-computer interaction (HCI) community, with special focus on critical challenges in patient-centered interactions and design opportunities that can address these challenges. We saw this community presenting and interacting at ACM SIGCHI 2013, the Conference on Human Factors in Computing Systems (also known as CHI), held April 27-May 2, 2013, at the Palais des Congrès de Paris in France. CHI 2013 featured many formal avenues to pursue patient-centered health communication: a well-attended workshop, tracks of original research, and a lively panel discussion. In this report, we highlight these events and the main themes we identified. We hope that it will help bring the health care communication and the HCI communities closer together. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters
Rempel, David; Camilleri, Matt J.; Lee, David L.
2015-01-01
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input. PMID:26028955
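The discomfort factors the study reports (a flexed wrist, discordant adjacent fingers, extended fingers) can be turned into a rough screening rule for candidate gesture vocabularies. The sketch below is illustrative only: the study fitted a nominal logistic regression, whereas this is a simple additive score, and the field names and weights are hypothetical.

```python
# Illustrative rule-of-thumb scorer (not the paper's fitted model): flag
# candidate gestures that combine the reported discomfort factors, so that
# common commands can be assigned to low-discomfort postures.
def discomfort_score(gesture: dict) -> int:
    score = 0
    if gesture.get("wrist_flexed"):
        score += 2               # weighted up: a strong reported association
    if gesture.get("adjacent_fingers_discordant"):
        score += 1
    score += gesture.get("extended_finger_count", 0)  # per extended finger
    return score

# A relaxed fist beats a flexed-wrist "claw" posture for a frequent command.
fist = {"wrist_flexed": False, "extended_finger_count": 0}
claw = {"wrist_flexed": True, "adjacent_fingers_discordant": True,
        "extended_finger_count": 3}
```

A designer could rank a proposed gesture set by such a score and reserve the lowest-scoring postures for the most frequent commands.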
Finding Waldo: Learning about Users from their Interactions.
Brown, Eli T; Ottley, Alvitta; Zhao, Helen; Quan Lin; Souvenir, Richard; Endert, Alex; Chang, Remco
2014-12-01
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 83% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user's personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time: in one case 95% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
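The pipeline sketched in the abstract (encode interaction logs as feature vectors, then apply a standard learning algorithm to predict fast versus slow completion) can be illustrated with a toy version. Everything here is an assumption for illustration: the real study used richer encodings and well-known ML algorithms, not this count-vector encoding or this nearest-centroid rule.

```python
# Toy sketch: encode each user's interaction log as event counts, then
# classify a new log by its distance to the "fast" and "slow" centroids.
from collections import Counter
from math import dist

EVENTS = ["click", "hover", "pan"]  # hypothetical event vocabulary

def encode(log):
    """Turn a list of interaction events into a fixed-length count vector."""
    c = Counter(log)
    return [c[e] for e in EVENTS]

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def predict(log, fast_logs, slow_logs):
    """Nearest-centroid prediction of task performance from one log."""
    v = encode(log)
    fast_c = centroid([encode(l) for l in fast_logs])
    slow_c = centroid([encode(l) for l in slow_logs])
    return "fast" if dist(v, fast_c) < dist(v, slow_c) else "slow"

# Hypothetical training logs: fast users click decisively, slow users hover.
fast_users = [["click", "click", "pan"], ["click", "click", "click"]]
slow_users = [["hover"] * 6 + ["click"], ["hover"] * 5]
```

The interesting point the paper makes is that even such low-level traces carry enough signal to predict performance well before the task ends.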
Interaction with Machine Improvisation
NASA Astrophysics Data System (ADS)
Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo
We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second framework, the stylistic interaction is delegated to machine intelligence and therefore, knowledge propagation and decisions are taken care of by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modelling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.
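The improvisation kernel described above rests on sequence modeling and statistical learning. As a deliberately minimal stand-in, a first-order transition model learned from a performance can already "continue" a phrase in the performer's style; the actual systems use far richer models, so this sketch is illustrative only.

```python
# Toy sequence model: learn first-order transition counts from a recorded
# phrase, then continue it by always choosing the most frequent successor.
from collections import defaultdict, Counter

def learn(sequence):
    """Count symbol-to-symbol transitions in a performed sequence."""
    model = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        model[a][b] += 1
    return model

def continue_phrase(model, start, length):
    """Greedily extend a phrase using the learned transitions."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break  # no learned continuation from this symbol
        out.append(successors.most_common(1)[0][0])
    return out

# A short hypothetical "performance" over pitch names.
performance = ["C", "E", "G", "E", "G", "C", "E", "G"]
model = learn(performance)
```

A real improvisation agent would sample stochastically and work at multiple time/memory scales, as the two described frameworks do.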
On Roles of Models in Information Systems
NASA Astrophysics Data System (ADS)
Sølvberg, Arne
The increasing penetration of computers into all aspects of human activity makes it desirable that the interplay among software, data and the domains where computers are applied is made more transparent. An approach to this end is to explicitly relate the modeling concepts of the domains, e.g., natural science, technology and business, to the modeling concepts of software and data. This may make it simpler to build comprehensible integrated models of the interactions between computers and non-computers, e.g., interaction among computers, people, physical processes, biological processes, and administrative processes. This chapter contains an analysis of various facets of the modeling environment for information systems engineering. The lack of satisfactory conceptual modeling tools seems to be central to the unsatisfactory state-of-the-art in establishing information systems. The chapter contains a proposal for defining a concept of information that is relevant to information systems engineering.
Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1998-01-01
A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single-pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument Systems (EFIS), Flight Management Computer (FMC) and Auto Flight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional-looking B757/767 displays or next-generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.
Can Robots and Humans Get Along?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
2007-06-01
Now that robots have moved into the mainstream—as vacuum cleaners, lawn mowers, autonomous vehicles, tour guides, and even pets—it is important to consider how everyday people will interact with them. A robot is really just a computer, but many researchers are beginning to understand that human-robot interactions are much different than human-computer interactions. So while the metrics used to evaluate the human-computer interaction (usability of the software interface in terms of time, accuracy, and user satisfaction) may also be appropriate for human-robot interactions, we need to determine whether there are additional metrics that should be considered.
The Dimensionality and Correlates of Flow in Human-Computer Interactions.
ERIC Educational Resources Information Center
Webster, Jane; And Others
1993-01-01
Defines playfulness in human-computer interactions in terms of flow theory and explores the dimensionality of the flow concept. Two studies are reported that investigated the factor structure and correlates of flow in human-computer interactions: one examined MBA students using Lotus 1-2-3 spreadsheet software, and one examined employees using…
Distributed and collaborative synthetic environments
NASA Technical Reports Server (NTRS)
Bajaj, Chandrajit L.; Bernardini, Fausto
1995-01-01
Fast graphics workstations and increased computing power, together with improved interface technologies, have created new and diverse possibilities for developing and interacting with synthetic environments. A synthetic environment system is generally characterized by input/output devices that constitute the interface between the human senses and the synthetic environment generated by the computer; and a computation system running a real-time simulation of the environment. A basic need of a synthetic environment system is that of giving the user a plausible reproduction of the visual aspect of the objects with which he is interacting. The goal of our Shastra research project is to provide a substrate of geometric data structures and algorithms which allow the distributed construction and modification of the environment, efficient querying of objects attributes, collaborative interaction with the environment, fast computation of collision detection and visibility information for efficient dynamic simulation and real-time scene display. In particular, we address the following issues: (1) A geometric framework for modeling and visualizing synthetic environments and interacting with them. We highlight the functions required for the geometric engine of a synthetic environment system. (2) A distribution and collaboration substrate that supports construction, modification, and interaction with synthetic environments on networked desktop machines.
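Among the functions listed above, fast collision detection is the most self-contained to illustrate. A standard first pass in synthetic environments (not necessarily Shastra's actual algorithm, which is not detailed here) is an axis-aligned bounding-box (AABB) overlap test that cheaply rules out most object pairs before any exact geometric query.

```python
# Broad-phase collision check: two axis-aligned boxes intersect iff their
# extents overlap on all three axes.
def aabb_overlap(a, b):
    """a, b: (min_corner, max_corner) triples. True if the boxes intersect."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

# Hypothetical scene objects, as (min, max) corners in world coordinates.
table = ((0, 0, 0), (2, 1, 2))
cup_on_table = ((1, 1, 1), (1.2, 1.3, 1.2))
lamp_far_away = ((5, 0, 5), (6, 2, 6))
```

Only pairs that pass this test need the expensive exact collision and visibility computations the abstract refers to.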
Bodily systems and the spatial-functional structure of the human body.
Smith, Barry; Munn, Katherine; Papakin, Igor
2004-01-01
The human body is a system made of systems. The body is divided into bodily systems proper, such as the endocrine and circulatory systems, which are subdivided into many sub-systems at a variety of levels, whereby all systems and subsystems engage in massive causal interaction with each other and with their surrounding environments. Here we offer an explicit definition of bodily system and provide a framework for understanding their causal interactions. Medical sciences provide at best informal accounts of basic notions such as system, process, and function, and while such informality is acceptable in documentation created for human beings, it falls short of what is needed for computer representations. In our analysis we will accordingly provide the framework for a formal definition of bodily system and of associated notions.
Accounting for User Diversity in Configuring Online Systems.
ERIC Educational Resources Information Center
Woolliams, Peter; Gee, David
1992-01-01
Discusses cultural diversity in human-computer interactions and in the design of online systems. Topics addressed include cognitive psychology; North American and European ethnocentricity; online systems and their organizational setting; models for organization culture; corporate culture; international systems and country-specific cultures; and…
NASA Astrophysics Data System (ADS)
See, Swee Lan; Tan, Mitchell; Looi, Qin En
This paper presents findings from a descriptive research study on social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and gamers' decision-making processes can be elicited and examined during human-computer interaction. This new information should be considered, as it can help us build better human-computer interfaces and human-robotic interfaces in the future.
Toward a formal definition of water scarcity in natural human systems
W.K. Jaeger; A.J. Plantinga; H. Chang; K. Dello; G. Grant; D. Hulse; J.J. McDonnell; S. Lancaster; H. Moradkhani; A.T. Morzillo; P. Mote; A. Nolin; M. Santlemann; J. Wu
2013-01-01
Water scarcity may appear to be a simple concept, but it can be difficult to apply to complex natural-human systems. While aggregate scarcity indices are straightforward to compute, they do not adequately represent the spatial and temporal variations in water scarcity that arise from complex systems interactions. The uncertain effects of future climate change on water...
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.
Crouser, R J; Chang, R
2012-12-01
Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
Real-time implementation of an interactive jazz accompaniment system
NASA Astrophysics Data System (ADS)
Deshpande, Nikhil
Modern computational algorithms and digital signal processing (DSP) are able to combine with human performers without forced or predetermined structure in order to create dynamic and real-time accompaniment systems. With modern computing power and intelligent algorithm layout and design, it is possible to achieve more detailed auditory analysis of live music. Using this information, computer code can follow and predict how a human's musical performance evolves, and use this to react in a musical manner. This project builds a real-time accompaniment system to perform together with live musicians, with a focus on live jazz performance and improvisation. The system utilizes a new polyphonic pitch detector and embeds it in an Ableton Live system - combined with Max for Live - to perform elements of audio analysis, generation, and triggering. The system also relies on tension curves and information rate calculations from the Creative Artificially Intuitive and Reasoning Agent (CAIRA) system to help understand and predict human improvisation. These metrics are vital to the core system and allow for extrapolated audio analysis. The system is able to react dynamically to a human performer, and can successfully accompany the human as an entire rhythm section.
1982-03-01
otherwise, and changes in parameters). The TIS, insofar as it has subgoals to reach, instructions on how to try or what to do if it is impeded...without affecting the computer (i.e., change the location, forces, labels or other properties of the display or manual control devices)...mode of interacting with the system: sensors, actuators and computers fixed versus flexible...
The Study of Surface Computer Supported Cooperative Work and Its Design, Efficiency, and Challenges
ERIC Educational Resources Information Center
Hwang, Wu-Yuin; Su, Jia-Han
2012-01-01
In this study, a Surface Computer Supported Cooperative Work paradigm is proposed. Recently, multitouch technology has become widely available for human-computer interaction. We found it has great potential to facilitate more awareness of human-to-human interaction than personal computers (PCs) in colocated collaborative work. However, other…
Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training
Lee, Yongkuk; Nicholls, Benjamin; Sup Lee, Dong; Chen, Yanfei; Chun, Youngjae; Siang Ang, Chee; Yeo, Woon-Hong
2017-01-01
We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification capture the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders. PMID:28429757
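The biofeedback system mentioned above must detect swallowing behavior from the chin EMG. A common minimal approach, shown here purely as an illustration (the paper's actual signal-processing pipeline is not detailed in the abstract), is to rectify the signal, smooth it with a moving average, and threshold the resulting envelope.

```python
# Illustrative swallow detector: a swallow produces a burst of EMG activity,
# so a thresholded moving-average of the rectified signal flags the event.
def detect_swallow(emg, window=3, threshold=0.5):
    """Return True if the smoothed rectified EMG crosses the threshold."""
    rectified = [abs(x) for x in emg]
    for i in range(len(rectified) - window + 1):
        if sum(rectified[i:i + window]) / window > threshold:
            return True
    return False

# Hypothetical samples: near-zero noise at rest, a burst during a swallow.
rest = [0.02, -0.03, 0.01, 0.02, -0.01, 0.03]
swallow = [0.02, 0.1, -0.9, 1.1, -0.8, 0.1]
```

In a training session, each detected event could trigger visual feedback, closing the HCI loop described in the abstract.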
Designing the user interface: strategies for effective human-computer interaction
NASA Astrophysics Data System (ADS)
Shneiderman, B.
1998-03-01
In revising this popular book, Ben Shneiderman again provides a complete, current and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. Shneiderman discusses the principles and practices needed to design such effective interaction. Based on 20 years' experience, Shneiderman offers readers practical techniques and guidelines for interface design. He also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find this book an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with a new discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles such as direct manipulation for graphical user interfaces, and design considerations such as effective messages, consistent screen design, and appropriate color.
Computational prediction of host-pathogen protein-protein interactions.
Dyer, Matthew D; Murali, T M; Sobral, Bruno W
2007-07-01
Infectious diseases such as malaria result in millions of deaths each year. An important aspect of any host-pathogen system is the mechanism by which a pathogen can infect its host. One method of infection is via protein-protein interactions (PPIs) where pathogen proteins target host proteins. Developing computational methods that identify which PPIs enable a pathogen to infect a host has great implications in identifying potential targets for therapeutics. We present a method that integrates known intra-species PPIs with protein-domain profiles to predict PPIs between host and pathogen proteins. Given a set of intra-species PPIs, we identify the functional domains in each of the interacting proteins. For every pair of functional domains, we use Bayesian statistics to assess the probability that two proteins with that pair of domains will interact. We apply our method to the Homo sapiens-Plasmodium falciparum host-pathogen system. Our system predicts 516 PPIs between proteins from these two organisms. We show that pairs of human proteins we predict to interact with the same Plasmodium protein are close to each other in the human PPI network and that Plasmodium pairs predicted to interact with same human protein are co-expressed in DNA microarray datasets measured during various stages of the Plasmodium life cycle. Finally, we identify functionally enriched sub-networks spanned by the predicted interactions and discuss the plausibility of our predictions. Supplementary data are available at http://staff.vbi.vt.edu/dyermd/publications/dyer2007a.html. Supplementary data are available at Bioinformatics online.
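The domain-pair idea in this abstract can be sketched compactly: estimate, from known intra-species PPIs, how likely two functional domains are to mediate an interaction, then score candidate host-pathogen protein pairs by their domain pairs. The sketch below is a deliberate simplification: it uses a smoothed frequency estimate rather than the paper's Bayesian treatment, and the protein and domain names are made up.

```python
# Simplified domain-pair scoring for cross-species PPI prediction.
from itertools import product

def domain_pair_probs(positive_pairs, negative_pairs, domains_of):
    """Estimate P(interaction | domain pair) with add-one smoothing."""
    pos, neg = {}, {}
    for pairs, counts in ((positive_pairs, pos), (negative_pairs, neg)):
        for p1, p2 in pairs:
            for d in product(domains_of[p1], domains_of[p2]):
                key = tuple(sorted(d))  # domain pairs are unordered
                counts[key] = counts.get(key, 0) + 1
    keys = set(pos) | set(neg)
    return {k: (pos.get(k, 0) + 1) / (pos.get(k, 0) + neg.get(k, 0) + 2)
            for k in keys}

def score(host_protein, pathogen_protein, probs, domains_of):
    """Score a cross-species pair by its most promising domain pair."""
    candidates = [tuple(sorted(d)) for d in
                  product(domains_of[host_protein],
                          domains_of[pathogen_protein])]
    return max((probs.get(c, 0.0) for c in candidates), default=0.0)

# Hypothetical data: host proteins hA/hB, pathogen proteins pX/pY.
domains_of = {"hA": ["kinase"], "hB": ["SH3"],
              "pX": ["kinase"], "pY": ["lectin"]}
probs = domain_pair_probs([("hA", "hA")], [("hA", "hB")], domains_of)
```

Training on intra-species interactomes and scoring cross-species pairs is what lets the method transfer evidence from well-studied organisms to the host-pathogen setting.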
Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.
Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo
2016-07-01
During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On the one hand, producing a visual artefact has a number of advantages: it helps designers externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed, it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify a) current HCI design methods and b) approaches for selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised into 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Multi-step EMG Classification Algorithm for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Ren, Peng; Barreto, Armando; Adjouadi, Malek
A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
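The first classification principle stated in the abstract, that the EMG energy of one channel is typically larger than the others during a specific muscle contraction, can be sketched as a max-energy rule over the three electrode channels. This is only an illustration of that single principle; the actual algorithm also uses spectral features and cross-channel correlation, and the channel-to-command mapping below is assumed, not taken from the paper.

```python
# Max-energy rule over the three EMG channels (frontalis, left temporalis,
# right temporalis): the dominant channel picks the cursor command.
def channel_energy(samples):
    """Signal energy of one EMG channel over a short window."""
    return sum(x * x for x in samples)

def classify(frontalis, left_temporalis, right_temporalis):
    energies = {
        "up": channel_energy(frontalis),            # eyebrow movement
        "left": channel_energy(left_temporalis),    # left jaw clench
        "right": channel_energy(right_temporalis),  # right jaw clench
    }
    return max(energies, key=energies.get)

# A burst on the left temporalis channel maps to a leftward cursor move.
quiet = [0.01, -0.02, 0.01]
burst = [0.5, -0.6, 0.7]
```

Distinguishing the remaining movements (eyebrows down, simultaneous clench for left-click) is where the spectral and correlation principles come in.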
A new security model for collaborative environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Deborah; Lorch, Markus; Thompson, Mary
Prevalent authentication and authorization models for distributed systems provide for the protection of computer systems and resources from unauthorized use. The rules and policies that drive the access decisions in such systems are typically configured up front and require trust establishment before the systems can be used. This approach does not work well for computer software that moderates human-to-human interaction. This work proposes a new model for trust establishment and management in computer systems supporting collaborative work. The model supports the dynamic addition of new users to a collaboration with very little initial trust placed into their identity and supports the incremental building of trust relationships through endorsements from established collaborators. It also recognizes the strength of a user's authentication when making trust decisions. By mimicking the way humans build trust naturally, the model can support a wide variety of usage scenarios. Its particular strength lies in the support for ad-hoc and dynamic collaborations and the ubiquitous access to a Computer Supported Collaboration Workspace (CSCW) system from locations with varying levels of trust and security.
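The incremental trust building described above can be rendered as a toy model: a newcomer starts with minimal trust, and endorsements from established collaborators raise it, weighted by the endorser's own trust and by the strength of the newcomer's authentication. All class names, numbers, and the update rule below are illustrative assumptions, not the paper's actual formulation.

```python
# Toy endorsement-based trust model for a collaborative workspace.
class Collaborator:
    def __init__(self, name, trust=0.1, auth_strength=1.0):
        self.name = name
        self.trust = trust                  # 0.0 (unknown) .. 1.0 (fully trusted)
        self.auth_strength = auth_strength  # e.g. 0.5 password, 1.0 PKI cert

    def endorse(self, newcomer, weight=0.5):
        """Transfer a fraction of the endorser's trust, scaled by how
        strongly the newcomer is authenticated, capped at 1.0."""
        gained = weight * self.trust * newcomer.auth_strength
        newcomer.trust = min(1.0, newcomer.trust + gained)

# An established collaborator vouches for a weakly-authenticated newcomer.
veteran = Collaborator("alice", trust=0.9)
newcomer = Collaborator("bob", trust=0.1, auth_strength=0.5)
veteran.endorse(newcomer)
```

Because the gain is scaled by the endorser's trust, endorsements from established collaborators count for more than endorsements from other newcomers, which is the dynamic the model relies on.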
Chao, Edmund Y S; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki
2007-03-08
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system, model library and database will impact on orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation.
1981-02-01
SUMMARY. Robert N. Parrish, Jesse L. Gates, and Sarah J. Munger, Synectics Corporation; Human Factors Technical Area, U.S. Army Research Institute. [...] of the Human Factors Technical Area, ARI, is the Contracting Officer's Representative (COR) for this project. Keywords: battlefield automated systems; human-computer interaction; design criteria; system analysis; design guidelines.
Human factors in the Naval Air Systems Command: Computer based training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seamster, T.L.; Snyder, C.E.; Terranova, M.
1988-01-01
Military standards applied to private sector contracts have a substantial effect on the quality of Computer Based Training (CBT) systems procured for the Naval Air Systems Command. This study evaluated standards regulating the following areas in CBT development and procurement: interactive training systems, cognitive task analysis, and CBT hardware. The objective was to develop some high-level recommendations for evolving standards that will govern the next generation of CBT systems. One of the key recommendations is that there be an integration of the instructional systems development, human factors engineering, and software development standards. Recommendations were also made for task analysis and CBT hardware standards. (9 refs., 3 figs.)
Finding Waldo: Learning about Users from their Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Eli T.; Ottley, Alvitta; Zhao, Helen
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and we apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 96% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that, using the same techniques, we can infer aspects of the user's personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time; in some cases, 82% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
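The core idea above, classifying users as fast or slow from encoded interaction features, can be sketched with a simple classifier. This is not the paper's pipeline or feature encodings: the synthetic data, the nearest-centroid rule, and the feature names in the comments are all illustrative assumptions.

```python
# Illustrative sketch: label users "fast" (1) or "slow" (0) and classify
# them from interaction-feature vectors with a nearest-centroid rule.
import random

random.seed(0)
# Each row: one user's encoded interactions (e.g., click rate, dwell time,
# pan/zoom frequency). Values here are synthetic.
users = [[random.random() for _ in range(3)] for _ in range(40)]
labels = [1 if u[0] > 0.5 else 0 for u in users]  # 1 = fast, 0 = slow

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

c_fast = centroid([u for u, y in zip(users, labels) if y == 1])
c_slow = centroid([u for u, y in zip(users, labels) if y == 0])

pred = [1 if dist2(u, c_fast) < dist2(u, c_slow) else 0 for u in users]
accuracy = sum(p == y for p, y in zip(pred, labels)) / len(labels)
print(accuracy > 0.5)
```

A real study would of course evaluate on held-out users and use stronger learners; the point of the sketch is only the shape of the task: interaction vectors in, a performance label out.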
Decision making and problem solving with computer assistance
NASA Technical Reports Server (NTRS)
Kraiss, F.
1980-01-01
In modern guidance and control systems, the human as manager, supervisor, decision maker, problem solver and trouble shooter often has to cope with a marginal mental workload. To improve this situation, computers should be used to relieve the operator of mental stress. This should not be done solely through increased automation, but through a reasonable sharing of tasks in a human-computer team, where the computer supports the human intelligence. Recent developments in this area are summarized. It is shown that interactive support of the operator by an intelligent computer is feasible during information evaluation, decision making and problem solving. The applied artificial intelligence algorithms comprise pattern recognition and classification, adaptation and machine learning, as well as dynamic and heuristic programming. Elementary examples are presented to explain the basic principles.
Haptic interfaces: Hardware, software and human performance
NASA Technical Reports Server (NTRS)
Srinivasan, Mandayam A.
1995-01-01
Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.
Body-Based Gender Recognition Using Images from Visible and Thermal Cameras
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487
Human-Computer Interaction with Medical Decisions Support Systems
NASA Technical Reports Server (NTRS)
Adolf, Jurine A.; Holden, Kritina L.
1994-01-01
Decision Support Systems (DSSs) have been available to medical diagnosticians for some time, yet their acceptance and use have not increased with advances in technology and availability of DSS tools. Medical DSSs will be necessary on future long duration space missions, because access to medical resources and personnel will be limited. Human-Computer Interaction (HCI) experts at NASA's Human Factors and Ergonomics Laboratory (HFEL) have been working toward understanding how humans use DSSs, with the goal of being able to identify and solve the problems associated with these systems. Work to date consists of identification of HCI research areas, development of a decision making model, and completion of two experiments dealing with 'anchoring'. Anchoring is a phenomenon in which the decision maker latches on to a starting point and does not make sufficient adjustments when new data are presented. HFEL personnel have replicated a well-known anchoring experiment and have investigated the effects of user level of knowledge. Future work includes further experimentation on level of knowledge, confidence in the source of information and sequential decision making.
Closed-loop dialog model of face-to-face communication with a photo-real virtual human
NASA Astrophysics Data System (ADS)
Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás
2004-01-01
We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.
NASA Technical Reports Server (NTRS)
1990-01-01
While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered (telepresence, it is called) or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full body garment that greatly increases the sphere of performance for virtual reality simulations.
Towards Automatic Treatment of Natural Language.
ERIC Educational Resources Information Center
Lonsdale, Deryle
1984-01-01
Because automated natural language processing relies heavily on the still developing fields of linguistics, knowledge representation, and computational linguistics, no system is capable of mimicking human linguistic capabilities. For the present, interactive systems may be used to augment today's technology. (MSE)
Data Analysis Tools and Methods for Improving the Interaction Design in E-Learning
ERIC Educational Resources Information Center
Popescu, Paul Stefan
2015-01-01
In this digital era, learning from data gathered from different software systems may have a great impact on the quality of the interaction experience. There are two main directions that come to enhance this emerging research domain, Intelligent Data Analysis (IDA) and Human Computer Interaction (HCI). HCI specific research methodologies can be…
NASA Technical Reports Server (NTRS)
Chu, Y.-Y.; Rouse, W. B.
1979-01-01
As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
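The threshold policy described above can be illustrated with a toy simulation. This is not the paper's queueing-theory formulation: the arrival and service probabilities, the threshold value, and the loop structure are all invented for demonstration.

```python
# Toy simulation of a threshold policy: tasks queue up for the human, and
# the computer is switched on (takes a task) whenever the queue exceeds a
# workload limit, then relinquishes control when the queue is acceptable.
import random

random.seed(1)
THRESHOLD = 3          # queue length at which the computer is switched on
queue = 0              # tasks currently waiting for the human
computer_handled = 0   # tasks taken over by the computer

for _ in range(1000):
    if random.random() < 0.6:          # a new task arrives
        queue += 1
    if queue > THRESHOLD:              # workload excessive: computer aids
        queue -= 1
        computer_handled += 1
    elif queue > 0 and random.random() < 0.5:  # human completes a task
        queue -= 1

print(queue, computer_handled)
```

Because arrivals outpace the human's service rate here, the computer ends up absorbing the overflow, which is exactly the backup-decisionmaker role the abstract describes: the queue never grows beyond the workload constraint.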
Eizicovits, Danny; Edan, Yael; Tabak, Iris; Levy-Tzedek, Shelly
2018-01-01
Effective human-robot interactions in rehabilitation necessitate an understanding of how these should be tailored to the needs of the human. We report on a robotic system developed as a partner on a 3-D everyday task, using a gamified approach. Our objectives were to: (1) design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) evaluate how age affects the response to such a robotic system; and (3) identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks. 62 healthy participants, young (<30 yo) and old (>60 yo), played a 3D tic-tac-toe game against an embodied (a robotic arm) and a non-embodied (a computer-controlled lighting system) partner. To win, participants had to place three cups in sequence on a physical 3D grid. Cup picking-and-placing was chosen as a functional task that is often practiced in post-stroke rehabilitation. Movement of the participants was recorded using a Kinect camera. The timing of the participants' movement was primed by the response time of the system: participants moved slower when playing with the slower embodied system (p = 0.006). The majority of participants preferred the robot over the computer-controlled system. The slower response time of the robot compared to the computer-controlled one affected only the young group's motivation to continue playing. We demonstrated the feasibility of the system to encourage the performance of repetitive 3D functional movements, and to track these movements. Young and old participants preferred to interact with the robot, compared with the non-embodied system. We contribute to the growing knowledge concerning personalized human-robot interactions by (1) demonstrating the priming of the human movement by the robotic movement, an important design feature, and (2) identifying response speed as a design variable, the importance of which depends on the age of the user.
Implementing Artificial Intelligence Behaviors in a Virtual World
NASA Technical Reports Server (NTRS)
Krisler, Brian; Thome, Michael
2012-01-01
In this paper, we present a look at the current state of the art in human-computer interface technologies, including intelligent interactive agents, natural speech interaction and gesture-based interfaces. We describe our use of these technologies to implement a cost-effective, immersive experience in a public region in Second Life. We provision our artificial agent as a German Shepherd Dog avatar with an external rules engine controlling its behavior and movement. To interact with the avatar, we implemented a natural language and gesture system allowing the human avatars to use speech and physical gestures rather than interacting via a keyboard and mouse. The result is a system that allows multiple humans to interact naturally with AI avatars by playing games such as fetch with a flying disk and even practicing obedience exercises using voice and gesture: a natural-seeming day in the park.
The challenge of computer mathematics.
Barendregt, Henk; Wiedijk, Freek
2005-10-15
Progress in the foundations of mathematics has made it possible to formulate all thinkable mathematical concepts, algorithms and proofs in one language and in an impeccable way. This is not in spite of, but partially based on the famous results of Gödel and Turing. In this way statements are about mathematical objects and algorithms, proofs show the correctness of statements and computations, and computations are dealing with objects and proofs. Interactive computer systems for a full integration of defining, computing and proving are based on this. The human defines concepts, constructs algorithms and provides proofs, while the machine checks that the definitions are well formed and the proofs and computations are correct. Results formalized so far demonstrate the feasibility of this 'computer mathematics'. Also there are very good applications. The challenge is to make the systems more mathematician-friendly, by building libraries and tools. The eventual goal is to help humans to learn, develop, communicate, referee and apply mathematics.
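The division of labor described above, where the human states the concept and the proof while the machine checks them, can be illustrated with a tiny proof-assistant example. The sketch below uses Lean 4 syntax and the core lemma `Nat.add_comm`; it is one illustrative instance of "computer mathematics", not drawn from the paper itself.

```lean
-- The human writes the statement and supplies the proof term;
-- the machine verifies that the statement is well formed and
-- that the proof actually establishes it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof term were wrong, say `Nat.add_comm b a` with mismatched sides, the checker would reject it, which is precisely the guarantee that formalization provides.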
MARTI: man-machine animation real-time interface
NASA Astrophysics Data System (ADS)
Jones, Christian M.; Dlay, Satnam S.
1997-05-01
The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a new level of realism to man-machine interfacing and special effect animation which has been previously unseen.
Dühring, Sybille; Germerodt, Sebastian; Skerka, Christine; Zipfel, Peter F.; Dandekar, Thomas; Schuster, Stefan
2015-01-01
The diploid, polymorphic yeast Candida albicans is one of the most important human pathogenic fungi. C. albicans can grow, proliferate and coexist as a commensal on or within the human host for a long time. However, alterations in the host environment can render C. albicans virulent. In this review, we describe the immunological cross-talk between C. albicans and the human innate immune system. We give an overview in the form of pairs of human defense strategies, including immunological mechanisms as well as general stressors such as nutrient limitation, pH and fever, and the corresponding fungal response and evasion mechanisms. Furthermore, Computational Systems Biology approaches to model and investigate these complex interactions are highlighted, with a special focus on game-theoretical methods and agent-based models. An outlook on interesting questions to be tackled by Systems Biology regarding entangled defense and evasion mechanisms is given. PMID:26175718
Roles for Agent Assistants in Field Science: Understanding Personal Projects and Collaboration
NASA Technical Reports Server (NTRS)
Clancey, William J.
2003-01-01
A human-centered approach to computer systems design involves reframing analysis in terms of the people interacting with each other. The primary concern is not how people can interact with computers, but how shall we design work systems (facilities, tools, roles, and procedures) to help people pursue their personal projects, as they work independently and collaboratively? Two case studies provide empirical requirements. First, an analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse. Second, an analysis of biologists and a geologist working at Haughton Crater in the High Canadian Arctic reveals how work interactions between people involve independent personal projects, sensitively coordinated for mutual benefit. In both cases, an agent or robotic system's role would be to assist people, rather than collaborating, because today's computer systems lack the identity and purpose that consciousness provides.
Multimodal approaches for emotion recognition: a survey
NASA Astrophysics Data System (ADS)
Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.
2004-12-01
Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances in emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.
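The fusion step the survey advocates can be sketched in its simplest probabilistic form: naive-Bayes-style late fusion, where each modality's classifier outputs a posterior over emotions and the posteriors are multiplied under an independence assumption. The emotion labels and all probability values below are invented for illustration; a full graphical model would also handle missing modalities and correlations.

```python
# Minimal sketch of probabilistic late fusion across three modalities,
# assuming conditionally independent per-modality posteriors and a
# uniform prior over emotions (all numbers are hypothetical).
emotions = ["happy", "sad", "angry"]
p_face   = [0.6, 0.3, 0.1]   # posterior from a facial-expression classifier
p_voice  = [0.5, 0.2, 0.3]   # posterior from a vocal-prosody classifier
p_physio = [0.4, 0.4, 0.2]   # posterior from a physiological-signal classifier

# Multiply per-modality posteriors, then renormalize to a distribution.
fused = [f * v * p for f, v, p in zip(p_face, p_voice, p_physio)]
total = sum(fused)
fused = [x / total for x in fused]

print(emotions[fused.index(max(fused))])  # → happy
```

Note how the fused estimate is sharper than any single modality: face and voice agree on "happy", and the weak physiological evidence is not enough to overturn them.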
Program Predicts Time Courses of Human/Computer Interactions
NASA Technical Reports Server (NTRS)
Vera, Alonso; Howes, Andrew
2005-01-01
CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
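The scheduling idea behind the PERT-chart output can be sketched as a critical-path computation over operator dependencies: each operator starts when all its predecessors finish, and the predicted task time is the longest path. The operator names, durations, and dependencies below are invented for illustration, not CPM X's actual operator set.

```python
# Sketch of critical-path scheduling over cognitive/perceptual/motor
# operators: predicted task time = longest dependency chain (ms).
durations = {"perceive": 100, "decide": 50, "move_hand": 200, "press_key": 100}
deps = {
    "decide": ["perceive"],
    "move_hand": ["perceive"],          # can overlap with "decide"
    "press_key": ["decide", "move_hand"],
}

finish = {}  # memoized finish times

def finish_time(op):
    if op not in finish:
        start = max((finish_time(d) for d in deps.get(op, [])), default=0)
        finish[op] = start + durations[op]
    return finish[op]

print(max(finish_time(op) for op in durations))  # → 400
```

Because "decide" and "move_hand" run in parallel after "perceive", the total is governed by the slower branch (100 + 200 + 100 = 400 ms), not the sum of all operator durations; this concurrency is what distinguishes the approach from naive sequential timing.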
Design Guidelines for CAI Authoring Systems.
ERIC Educational Resources Information Center
Hunka, S.
1989-01-01
Discussion of the use of authoring systems for courseware development focuses on guidelines to be considered when designing authoring systems. Topics discussed include allowing a variety of instructional strategies; interaction with peripheral processes such as student records; the editing process; and human factors in computer interface design,…
The Effectiveness of Gaze-Contingent Control in Computer Games.
Orlov, Paul A; Apraksin, Nikolay
2015-01-01
Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on the "time of recognition" of a game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than that of mouse manipulation in a visual search task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems is over 1.5 times higher. In addition, we found that eye behavior remains quite stable with or without mouse interaction. © The Author(s) 2015.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Hammer, John M.
1990-01-01
Ways in which computers can aid the decision making of a human operator of an aerospace system are investigated. The approach taken is to aid rather than replace the human operator, because operational experience has shown that humans can enhance the effectiveness of systems. As systems become more automated, the role of the operator has shifted to that of a manager and problem solver. This shift has created the research area of how to aid the human in this role. Published research in four areas is described, and a discussion is presented of the DC-8 flight simulator at Georgia Tech.
Real-time interactive 3D computer stereography for recreational applications
NASA Astrophysics Data System (ADS)
Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi
2008-02-01
With the increasing calculation costs of 3D computer stereography, low-cost, high-speed implementation of the latter requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of humans' 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.
NASA Technical Reports Server (NTRS)
1993-01-01
Jack is an advanced human factors software package that provides a three-dimensional model for predicting how a human will interact with a given system or environment. It can be used for a broad range of computer-aided design applications. Jack was developed by the Computer Graphics Research Laboratory of the University of Pennsylvania with assistance from NASA's Johnson Space Center, Ames Research Center, and the Army. It is the University's first commercial product. Jack is still used for academic purposes at the University of Pennsylvania; commercial rights were given to Transom Technologies, Inc.
Assess program: Interactive data management systems for airborne research
NASA Technical Reports Server (NTRS)
Munoz, R. M.; Reller, J. O., Jr.
1974-01-01
Two data systems were developed for use in airborne research. Both have distributed intelligence and are programmed for interactive support among computers and with human operators. The C-141 system (ADAMS) performs flight planning and telescope control functions in addition to its primary role of data acquisition; the CV-990 system (ADDAS) performs data management functions in support of many research experiments operating concurrently. Each system is arranged for maximum reliability in the first priority function, precision data acquisition.
STEPS: A Simulated, Tutorable Physics Student.
ERIC Educational Resources Information Center
Ur, Sigalit; VanLehn, Kurt
1995-01-01
Describes a simulated student that learns by interacting with a human tutor. Tests suggest that simulated students, when developed past the prototype stage, could be valuable for training human tutors. Provides a computational cognitive task analysis of the skill of learning from a tutor that is useful for designing intelligent tutoring systems.…
HExpoChem: a systems biology resource to explore human exposure to chemicals.
Taboureau, Olivier; Jacobsen, Ulrik Plesner; Kalhauge, Christian; Edsgärd, Daniel; Rigina, Olga; Gupta, Ramneek; Audouze, Karine
2013-05-01
Humans are exposed to diverse hazardous chemicals daily. Although exposure to these chemicals is suspected to have adverse effects on human health, mechanistic insights into how they interact with the human body are still limited. Therefore, acquisition of curated data and development of computational biology approaches are needed to assess the health risks of chemical exposure. Here we present HExpoChem, a tool based on environmental chemicals and their bioactivities on human proteins, with the objective of aiding the qualitative exploration of human exposure to chemicals. The chemical-protein interactions have been enriched with a quality-scored human protein-protein interaction network, a protein-protein association network and a chemical-chemical interaction network, thus allowing the study of environmental chemicals through the formation of protein complexes and phenotypic outcome enrichment. HExpoChem is available at http://www.cbs.dtu.dk/services/HExpoChem-1.0/.
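The kind of network exploration HExpoChem supports can be sketched with a toy graph query (hypothetical node naming and toy data; the actual resource uses curated, quality-scored interaction networks and more sophisticated enrichment):

```python
from collections import defaultdict

def build_graph(edges):
    """Undirected adjacency over chemical-protein, protein-protein,
    and chemical-chemical interaction edges."""
    g = defaultdict(set)
    for a, b in edges:
        g[a].add(b)
        g[b].add(a)
    return g

def proteins_near_chemical(g, chemical, depth=2):
    """Proteins reachable from a chemical within `depth` hops of the
    combined interaction network (nodes prefixed 'P:' are proteins)."""
    frontier, seen = {chemical}, {chemical}
    for _ in range(depth):
        frontier = {n for u in frontier for n in g[u]} - seen
        seen |= frontier
    return {n for n in seen if n.startswith("P:")}
```

With depth 2, a chemical's directly bound proteins and their interaction partners are returned together, which is the basic move behind studying exposure through protein complexes.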
Development of an E-Prime Based Computer Simulation of an Interactive Human Rights Violation Negotiation Script
2010-12-01
This report (DRDC Toronto No. CR 2010-055) describes the method of developing an E-Prime computer simulation of an interactive Human Rights Violation (HRV) negotiation at Canadian Forces Base (CFB) Kingston. The computer simulation developed in this project is intended to be used for future research and as a possible training platform.
The Further Development of CSIEC Project Driven by Application and Evaluation in English Education
ERIC Educational Resources Information Center
Jia, Jiyou; Chen, Weichao
2009-01-01
In this paper, we present the comprehensive version of CSIEC (Computer Simulation in Educational Communication), an interactive web-based human-computer dialogue system with natural language for English instruction, and its tentative application and evaluation in English education. First, we briefly introduce the motivation for this project,…
A multimodal dataset for authoring and editing multimedia content: The MAMEM project.
Nikolopoulos, Spiros; Petrantonakis, Panagiotis C; Georgiadis, Kostas; Kalaganis, Fotis; Liaros, Georgios; Lazarou, Ioulietta; Adam, Katerina; Papazoglou-Chalikias, Anastasios; Chatzilari, Elisavet; Oikonomou, Vangelis P; Kumar, Chandan; Menges, Raphael; Staab, Steffen; Müller, Daniel; Sengupta, Korok; Bostantjopoulou, Sevasti; Katsarou, Zoe; Zeilig, Gabi; Plotnik, Meir; Gotlieb, Amihai; Kizoni, Racheli; Fountoukidou, Sofia; Ham, Jaap; Athanasiou, Dimitrios; Mariakaki, Agnes; Comanducci, Dario; Sabatini, Edoardo; Nistico, Walter; Plank, Markus; Kompatsiaris, Ioannis
2017-12-01
We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
Developing Visualization Techniques for Semantics-based Information Networks
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Hall, David R.
2003-01-01
Information systems incorporating complex network structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.
Web-based interactive drone control using hand gesture
NASA Astrophysics Data System (ADS)
Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng
2018-01-01
This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control command and downlink data (e.g., video) are transmitted over WiFi, and all information exchange is realized on the web. The control command is translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified as a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. Meanwhile, a web-based automatic control mode is provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system, including control accuracy, operation latency, etc. This system can be used in many applications, such as controlling a drone in a global-positioning-system-denied environment or by handlers without professional drone control knowledge, since it is easy to get started.
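The gesture-to-command translation layer can be sketched as a simple lookup table with a fail-safe default (hypothetical gesture and command names; the paper does not specify its command set):

```python
# Hypothetical mapping from recognized hand gestures to uplink commands.
GESTURE_COMMANDS = {
    "open_palm":  {"cmd": "hover"},
    "fist":       {"cmd": "land"},
    "point_up":   {"cmd": "ascend", "speed": 0.5},
    "point_down": {"cmd": "descend", "speed": 0.5},
    "swipe_left": {"cmd": "yaw", "rate": -30},
}

def translate(gesture: str) -> dict:
    """Translate a recognized gesture into an uplink control command;
    unrecognized gestures fall back to a safe hover."""
    return GESTURE_COMMANDS.get(gesture, {"cmd": "hover"})
```

Defaulting to hover on unrecognized input is one plausible way to keep the drone safe when the vision-based sensor misclassifies a gesture.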
Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.
ERIC Educational Resources Information Center
Deaudelin, Colette; Dussault, Marc; Brodeur, Monique
2003-01-01
Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…
OFMspert: An architecture for an operator's associate that evolves to an intelligent tutor
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1991-01-01
With the emergence of new technology for both human-computer interaction and knowledge-based systems, a range of opportunities exists to enhance the effectiveness and efficiency of controllers of high-risk engineering systems. The design of an architecture for an operator's associate is described. This associate is a stand-alone model-based system designed to interact with operators of complex dynamic systems, such as airplanes, manned space systems, and satellite ground control systems, in ways comparable to those of a human assistant. The operator function model expert system (OFMspert) architecture and the design and empirical validation of OFMspert's understanding component are described. The design and validation of OFMspert's interactive and control components are also described. A description of current work is provided, in which OFMspert serves as the foundation for an intelligent tutor that evolves into an assistant as operator expertise evolves from novice to expert.
Shared resource control between human and computer
NASA Technical Reports Server (NTRS)
Hendler, James; Wilson, Reid
1989-01-01
The advantages of an AI system that actively monitors human control of a shared resource (such as a telerobotic manipulator) are presented. A system is described in which a simple AI planning program gains efficiency by monitoring human actions and recognizing when those actions change the system's assumed state of the world. This enables the planner to recognize when an interaction occurs between human actions and system goals, and to maintain up-to-date knowledge of the state of the world. The planner can thus inform the operator when a human action would undo a goal achieved by the system or would render a system goal unachievable, and can efficiently replan the establishment of goals after human intervention.
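The monitoring step can be sketched in STRIPS-like terms (a hypothetical add/delete effect representation; the original planner is not described at this level of detail):

```python
def apply_human_action(world_state, achieved_goals, effects):
    """Update the planner's assumed world state after an observed human
    action, and report any previously achieved goals the action undid.
    `effects` carries hypothetical STRIPS-style add/delete sets."""
    new_state = (world_state - effects["deletes"]) | effects["adds"]
    undone = {g for g in achieved_goals if g not in new_state}
    return new_state, undone
```

Any goal in `undone` is what the system would flag to the operator, and is also the trigger for replanning its establishment.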
Psychological Issues in Online Adaptive Task Allocation
NASA Technical Reports Server (NTRS)
Morris, N. M.; Rouse, W. B.; Ward, S. L.; Frey, P. R.
1984-01-01
Adaptive aiding is an idea that offers potential for improvement over many current approaches to aiding in human-computer systems. The expected return of tailoring the system to fit the user could be in the form of improved system performance and/or increased user satisfaction. Issues such as the manner in which information is shared between human and computer, the appropriate division of labor between them, and the level of autonomy of the aid are explored. A simulated visual search task was developed. Subjects are required to identify targets in a moving display while performing a compensatory sub-critical tracking task. By manipulating characteristics of the situation such as imposed task-related workload and effort required to communicate with the computer, it is possible to create conditions in which interaction with the computer would be more or less desirable. The results of preliminary research using this experimental scenario are presented, and future directions for this research effort are discussed.
Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity.
Zander, Thorsten O; Krol, Laurens R; Birbaumer, Niels P; Gramann, Klaus
2016-12-27
The effectiveness of today's human-machine interaction is limited by a communication bottleneck as operators are required to translate high-level concepts into a machine-mandated sequence of instructions. In contrast, we demonstrate effective, goal-oriented control of a computer system without any form of explicit communication from the human operator. Instead, the system generated the necessary input itself, based on real-time analysis of brain activity. Specific brain responses were evoked by violating the operators' expectations to varying degrees. The evoked brain activity demonstrated detectable differences reflecting congruency with or deviations from the operators' expectations. Real-time analysis of this activity was used to build a user model of those expectations, thus representing the optimal (expected) state as perceived by the operator. Based on this model, which was continuously updated, the computer automatically adapted itself to the expectations of its operator. Further analyses showed this evoked activity to originate from the medial prefrontal cortex and to exhibit a linear correspondence to the degree of expectation violation. These findings extend our understanding of human predictive coding and provide evidence that the information used to generate the user model is task-specific and reflects goal congruency. This paper demonstrates a form of interaction without any explicit input by the operator, enabling computer systems to become neuroadaptive, that is, to automatically adapt to specific aspects of their operator's mindset. Neuroadaptive technology significantly widens the communication bottleneck and has the potential to fundamentally change the way we interact with technology.
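The continuously updated user model described above can be caricatured as an exponentially smoothed, per-option estimate of expectation violation (hypothetical learning rate and violation scores; the actual system derives these from real-time classification of EEG responses):

```python
def update_user_model(model, option, violation, lr=0.2):
    """Update a per-option estimate of how strongly the option violates
    the operator's expectations (violation in [0, 1]; lower = expected).
    `lr` is a hypothetical smoothing rate."""
    model[option] = (1 - lr) * model.get(option, 0.5) + lr * violation
    return model

def adapt(model, options):
    """Pick the option the operator is estimated to expect most."""
    return min(options, key=lambda o: model.get(o, 0.5))
```

Repeated brain responses to each candidate action sharpen the model, so the system drifts toward the operator's implicit preferences without any explicit input.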
NASA Astrophysics Data System (ADS)
Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.
2000-08-01
We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eyetracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
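The 'modality server' pattern (wrapping a recognition package behind a stable socket interface) can be sketched with Python's socketserver. The line-delimited JSON protocol and names here are hypothetical; the original servers' wire format is not specified:

```python
import json
import socketserver

class ModalityServer(socketserver.ThreadingTCPServer):
    """Exposes one wrapped modality package over a socket interface."""
    allow_reuse_address = True

    def __init__(self, addr, backend):
        super().__init__(addr, ModalityHandler)
        self.backend = backend  # the encapsulated recognition package

class ModalityHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One JSON request per line; one JSON reply per line.
        for line in self.rfile:
            request = json.loads(line)
            reply = {"result": self.server.backend(request["input"])}
            self.wfile.write((json.dumps(reply) + "\n").encode())
```

Because clients only see the socket protocol, the backend package can be swapped for a better recognizer without client-side changes, which is the vendor-independence property the abstract describes.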
Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems
2015-05-19
algorithm based on Age-Fitness Pareto Optimization (AFPO) ([9]) with an additional user preference objective and a neural network-based user model, we ... greater than 40, which is about 5 times further than any robot traveled in our experiments ... The algorithm uses a client-server computational architecture. The client here is an interactive program which takes a pair of controllers as input and simulates two copies of the robot with
Sun, Huey-Min; Li, Shang-Phone; Zhu, Yu-Qian; Hsiao, Bo
2015-09-01
Technological advances in human-computer interaction have attracted increasing research attention, especially in the field of virtual reality (VR). Prior research has focused on examining the effects of VR on various outcomes, for example, learning and health. However, which factors affect the final outcomes? That is, what kind of VR system design will achieve higher usability? This question remains largely unanswered. Furthermore, when we look at VR system deployment through a human-computer interaction (HCI) lens, does the user's attitude play a role in achieving the final outcome? This study aims to understand the effect of immersion and involvement, as well as users' regulatory focus, on usability for a somatosensory VR learning system. This study hypothesized that regulatory focus and presence can effectively enhance the user's perceived usability. Survey data from 78 students in Taiwan indicated that promotion focus is positively related to the user's perceived efficiency, whereas involvement and promotion focus are positively related to the user's perceived effectiveness. Promotion focus also predicts user satisfaction and overall usability perception. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Holistic Modeling for Human-Autonomous System Interaction
2015-01-01
piloting ... (2012). 18X Pilots Learn RPAs First. Retrieved April 7, 2013, from http://www.holloman.af.mil/news/story.asp ... human processor (QN-MHP): a computational architecture for multitask performance in human-machine
Interactive Relationships with Computers in Teaching Reading.
ERIC Educational Resources Information Center
Doublier, Rene M.
This study summarizes recent achievements in the expanding development of man/machine communications and reviews current technological hurdles associated with the development of artificial intelligence systems which can generate and recognize human speech patterns. With the development of such systems, one potential application would be the…
Facing the challenges of multiscale modelling of bacterial and fungal pathogen–host interactions
Schleicher, Jana; Conrad, Theresia; Gustafsson, Mika; Cedersund, Gunnar; Guthke, Reinhard
2017-01-01
Recent and rapidly evolving progress on high-throughput measurement techniques and computational performance has led to the emergence of new disciplines, such as systems medicine and translational systems biology. At the core of these disciplines lies the desire to produce multiscale models: mathematical models that integrate multiple scales of biological organization, ranging from molecular, cellular and tissue models to organ, whole-organism and population scale models. Using such models, hypotheses can systematically be tested. In this review, we present state-of-the-art multiscale modelling of bacterial and fungal infections, considering both the pathogen and host as well as their interaction. Multiscale modelling of the interactions of bacteria, especially Mycobacterium tuberculosis, with the human host is quite advanced. In contrast, models for fungal infections are still in their infancy, in particular regarding infections with the most important human pathogenic fungi, Candida albicans and Aspergillus fumigatus. We reflect on the current availability of computational approaches for multiscale modelling of host–pathogen interactions and point out current challenges. Finally, we provide an outlook for future requirements of multiscale modelling. PMID:26857943
Predicting human activities in sequences of actions in RGB-D videos
NASA Astrophysics Data System (ADS)
Jardim, David; Nunes, Luís; Dias, Miguel
2017-03-01
In our daily activities we perform prediction or anticipation when interacting with other humans or with objects. Prediction of human activity by computers has several potential applications: surveillance systems, human-computer interfaces, sports video analysis, human-robot collaboration, games, and health care. We propose a system capable of recognizing and predicting human actions using supervised classifiers trained with automatically labeled data, evaluated on our human activity RGB-D dataset (recorded with a Kinect sensor) and using only the positions of the main skeleton joints to extract features. Conditional random fields (CRFs) have been used before to model the sequential nature of actions in a sequence, but where other approaches try to predict an outcome or anticipate ahead in time (by seconds), we try to predict what the subject's next action will be. Our results show an activity prediction accuracy of 89.9% using an automatically labeled dataset.
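As a much simpler stand-in for the CRF sequence model, next-action prediction over labeled action sequences can be sketched with bigram transition counts (toy action labels; the actual system learns from skeleton-joint features):

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count action-to-action transitions across labeled sequences."""
    trans = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            trans[a][b] += 1
    return trans

def predict_next(trans, current):
    """Most frequent successor of the current action (None if unseen)."""
    return trans[current].most_common(1)[0][0] if trans[current] else None
```

A CRF conditions on richer features of the whole sequence, but the prediction target is the same: the most probable next action given what has been observed so far.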
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-01-01
Objective: At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real world device has on subjects' control with comparison to a two-dimensional virtual cursor task. Approach: Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance: Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in three-dimensional physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional non-invasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712
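Sensorimotor-rhythm BCIs like this one derive their control signal from band power in the mu band (roughly 8-12 Hz). A minimal band-power estimate via a direct DFT is sketched below; it is a stand-in for the Welch or autoregressive estimators commonly used in practice, not the authors' pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` (a list of samples at rate `fs` Hz) in the
    band [f_lo, f_hi] Hz, computed by summing DFT bin powers."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power
```

A drop in mu-band power over the motor cortex (event-related desynchronization during imagined movement) is the feature such systems typically translate into a directional command.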
Human-Robot Teams for Unknown and Uncertain Environments
NASA Technical Reports Server (NTRS)
Fong, Terry
2015-01-01
Human-robot interaction, often referred to as HRI, is the study of interactions between humans and robots. It is a multidisciplinary field with contributions from human-computer interaction and artificial intelligence.
Safety Analysis of FMS/CTAS Interactions During Aircraft Arrivals
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.
1998-01-01
This grant funded research on human-computer interaction design and analysis techniques, using future ATC environments as a testbed. The basic approach was to model the nominal behavior of both the automated and human procedures and then to apply safety analysis techniques to these models. Our previous modeling language, RSML, had been used to specify the system requirements for TCAS II for the FAA. Using the lessons learned from this experience, we designed a new modeling language that (among other things) incorporates features to assist in designing less error-prone human-computer interactions and interfaces and in detecting potential HCI problems, such as mode confusion. The new language, SpecTRM-RL, uses "intent" abstractions, based on Rasmussen's abstraction hierarchy, and includes both informal (English and graphical) specifications and formal, executable models for specifying various aspects of the system. One of the goals for our language was to highlight the system modes and mode changes to assist in identifying the potential for mode confusion. Three published papers resulted from this research. The first builds on the work of Degani on mode confusion to identify aspects of the system design that could lead to potential hazards. We defined and modeled modes differently than Degani and also defined design criteria for SpecTRM-RL models. Our design criteria include the Degani criteria but extend them to include more potential problems. In a second paper, Leveson and Palmer showed how the criteria for indirect mode transitions could be applied to a mode confusion problem found in several ASRS reports for the MD-88. In addition, we defined a visual task modeling language that can be used by system designers to model human-computer interaction. 
The visual models can be translated into SpecTRM-RL models, and then the SpecTRM-RL suite of analysis tools can be used to perform formal and informal safety analyses on the task model in isolation or integrated with the rest of the modeled system. We had hoped to be able to apply these modeling languages and analysis tools to a TAP air/ground trajectory negotiation scenario, but the development of the tools took more time than we anticipated.
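The 'indirect mode transition' criterion mentioned above can be illustrated with a toy state-machine check. The mode names and transition table here are hypothetical; SpecTRM-RL's actual analysis operates on executable formal models:

```python
def indirect_transitions(transitions):
    """Flag mode changes not initiated by direct operator input, one of
    the design criteria associated with mode-confusion potential."""
    return [t for t in transitions if t["trigger"] != "pilot_input"]

# Hypothetical automation transition table for illustration.
TRANSITIONS = [
    {"src": "VNAV", "dst": "ALT_HOLD", "trigger": "pilot_input"},
    {"src": "ALT_HOLD", "dst": "VNAV", "trigger": "altitude_capture"},
]
```

The second transition fires on an internal capture condition rather than a pilot action, so the automation can change modes silently, which is exactly the kind of transition the criteria are designed to surface for interface review.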
Rana computatrix to human language: towards a computational neuroethology of language evolution.
Arbib, Michael A
2003-10-15
Walter's Machina speculatrix inspired the name Rana computatrix for a family of models of visuomotor coordination in the frog, which contributed to the development of computational neuroethology. We offer here an 'evolutionary' perspective on models in the same tradition for rat, monkey and human. For rat, we show how the frog-like taxon affordance model provides a basis for the spatial navigation mechanisms that involve the hippocampus and other brain regions. For monkey, we recall two models of neural mechanisms for visuomotor coordination. The first, for saccades, shows how interactions between the parietal and frontal cortex augment superior colliculus seen as the homologue of frog tectum. The second, for grasping, continues the theme of parieto-frontal interactions, linking parietal affordances to motor schemas in premotor cortex. It further emphasizes the mirror system for grasping, in which neurons are active both when the monkey executes a specific grasp and when it observes a similar grasp executed by others. The model of human-brain mechanisms is based on the mirror-system hypothesis of the evolution of the language-ready brain, which sees the human Broca's area as an evolved extension of the mirror system for grasping.
Simulating the decentralized processes of the human immune system in a virtual anatomy model.
Sarpe, Vladimir; Jacob, Christian
2013-01-01
Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove to be challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model - such as immune cells, viruses and cytokines - interact through simulated physics in two compartmentalized and decentralized 3-dimensional environments, namely (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation and interaction of immune system agents between these sites. The distribution of simulated processes, which can communicate across multiple local CPUs or through a network of machines, provides a starting point for building decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model.
We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.
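One compartment's agent update can be caricatured as follows (hypothetical replication and clearance rules with made-up rates; the actual simulation resolves agent interactions through physics-based collisions in 3D):

```python
import random

def tissue_step(agents, rng=random):
    """One tick of the tissue compartment: each virus agent may
    replicate, and each immune-cell agent may clear one virus on a
    simulated encounter. Rates (0.3, 0.5) are illustrative only."""
    viruses = [a for a in agents if a == "virus"]
    cells = [a for a in agents if a == "immune_cell"]
    viruses += ["virus"] * sum(rng.random() < 0.3 for _ in viruses)
    for _ in cells:
        if viruses and rng.random() < 0.5:
            viruses.pop()
    return viruses + cells
```

Running this loop per compartment, with occasional agent exchange between the tissue and lymph-node pools, mirrors the asynchronous, communicating-compartments design the abstract describes.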
Rethinking Human-Centered Computing: Finding the Customer and Negotiated Interactions at the Airport
NASA Technical Reports Server (NTRS)
Wales, Roxana; O'Neill, John; Mirmalek, Zara
2003-01-01
The breakdown in the air transportation system over the past several years raises an interesting question for researchers: How can we help improve the reliability of airline operations? In offering some answers to this question, we make a statement about Human-Centered Computing (HCC). First we offer the definition that HCC is a multi-disciplinary research and design methodology focused on supporting humans as they use technology by including cognitive and social systems, computational tools and the physical environment in the analysis of organizational systems. We suggest that a key element in understanding organizational systems is that there are external cognitive and social systems (customers) as well as internal cognitive and social systems (employees) and that they interact dynamically to impact the organization and its work. The design of human-centered intelligent systems must take this outside-inside dynamic into account. In the past, the design of intelligent systems has focused on supporting the work and improvisation requirements of employees but has often assumed that customer requirements are implicitly satisfied by employee requirements.
Taking a customer-centric perspective provides a different lens for understanding this outside-inside dynamic, the work of the organization, and the requirements of both customers and employees. In this article we will: 1) Demonstrate how the use of ethnographic methods revealed the important outside-inside dynamic in an airline, specifically the consequential relationship between external customer requirements and perspectives and internal organizational processes and perspectives as they came together in a changing environment; 2) Describe how taking a customer-centric perspective identifies places where the impact of the outside-inside dynamic is most critical and requires technology that can be adaptive; 3) Define and discuss the place of negotiated interactions in airline operations, identifying how these interactions between customers and airline employees provided new insights into design problems in the airline system; 4) Show how taking a customer-centric perspective influences the HCC design of an airline system, and make recommendations for new architectures and intelligent devices that will enable airline systems to adapt flexibly to delay situations, supporting both customers and airline employees.
Modeling the Emergence of Lexicons in Homesign Systems
Richie, Russell; Yang, Charles; Coppola, Marie
2014-01-01
It is largely acknowledged that natural languages emerge from not just human brains, but also from rich communities of interacting human brains (Senghas, 2005). Yet the precise role of such communities and such interaction in the emergence of core properties of language has largely gone uninvestigated in naturally emerging systems, leaving the few existing computational investigations of this issue confined to artificial settings. Here we take a step towards investigating the precise role of community structure in the emergence of linguistic conventions with both naturalistic empirical data and computational modeling. We first show conventionalization of lexicons in two different classes of naturally emerging signed systems: (1) protolinguistic “homesigns” invented by linguistically isolated Deaf individuals, and (2) a natural sign language emerging in a recently formed rich Deaf community. We find that the latter conventionalized faster than the former. Second, we model conventionalization as a population of interacting individuals who adjust their probability of sign use in response to other individuals' actual sign use, following an independently motivated model of language learning (Yang 2002, 2004). Simulations suggest that a richer social network, like that of natural (signed) languages, conventionalizes faster than a sparser social network, like that of homesign systems. We discuss our behavioral and computational results in light of other work on language emergence, and other work on behavior in complex networks. PMID:24482343
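The population model described above can be sketched as a toy simulation in the spirit of a linear reward-penalty learner (this is an illustrative sketch, not the authors' implementation; the network sizes, learning rate, and consensus measure are assumptions made here for demonstration):

```python
import numpy as np

def simulate(edges, n_agents, rounds=20000, gamma=0.05, seed=0):
    """Each agent holds p = probability of producing sign variant A (vs. B).
    Each round a random edge fires; the listener nudges its p toward the
    variant the speaker actually produced (linear reward-penalty update)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_agents, 0.5)          # everyone starts undecided
    edges = np.array(edges)
    for _ in range(rounds):
        s, l = edges[rng.integers(len(edges))]
        if rng.random() < 0.5:          # random speaking direction on the edge
            s, l = l, s
        if rng.random() < p[s]:         # speaker produced variant A
            p[l] += gamma * (1.0 - p[l])
        else:                           # speaker produced variant B
            p[l] *= (1.0 - gamma)
    # population-level agreement: 0 = split between variants, 1 = full convention
    return abs(p.mean() - 0.5) * 2.0

n = 10
dense = [(i, j) for i in range(n) for j in range(i + 1, n)]   # complete graph
sparse = [(i, (i + 1) % n) for i in range(n)]                 # ring
c_dense = simulate(dense, n)
c_sparse = simulate(sparse, n)
```

Comparing `c_dense` and `c_sparse` across seeds mirrors the paper's qualitative claim that denser interaction networks reach a shared lexicon faster, since on a sparse ring local clusters can settle on different variants.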
Human interaction with wearable computer systems: a look at glasses-mounted displays
NASA Astrophysics Data System (ADS)
Revels, Allen R.; Quill, Laurie L.; Kancler, David E.; Masquelier, Barbara L.
1998-09-01
With the advancement of technology and the information explosion, integration of the two into performance aiding systems can have a significant impact on operational and maintenance environments. The Department of Defense and commercial industry have made great strides in digitizing and automating technical manuals and data to be presented on performance aiding systems. These performance aids are computerized interactive systems that provide procedures on how to operate and maintain fielded systems. The idea is to provide the end-user with a system that is compatible with their work environment. The purpose of this paper is to show, historically, the progression of wearable computer aiding systems for maintenance environments, and then highlight the work accomplished in the design and development of glasses-mounted displays (GMD). The paper reviews work performed over the last seven years, then highlights, through review of a usability study, the advances made with GMDs. The use of portable computing systems, such as laptop and notebook computers, does not necessarily increase the accessibility of the displayed information while accomplishing a given task in a hands-busy, mobile work environment. The use of a GMD increases accessibility of the information by placing it within the user's line of sight without obstructing the surrounding environment. Although the potential utility for this type of display is great, hardware and human integration must be refined. Results from the usability study show the usefulness and usability of the GMD in a mobile, hands-free environment.
Live interactive computer music performance practice
NASA Astrophysics Data System (ADS)
Wessel, David
2002-05-01
A live-performance musical instrument can be assembled around current laptop computer technology. One adds a controller such as a keyboard or other gestural input device, a sound diffusion system, some form of connectivity processor(s) providing for audio I/O and gestural controller input, and reactive real-time native signal processing software. A system consisting of a hand gesture controller; software for gesture analysis and mapping, machine listening, composition, and sound synthesis; and a controllable radiation pattern loudspeaker is described. Interactivity begins in the setup, wherein the speaker-room combination is tuned with an LMS procedure. This system was designed for improvisation. It is argued that software suitable for carrying out an improvised musical dialog with another performer poses special challenges. The processes underlying the generation of musical material must be very adaptable, capable of rapid changes in musical direction. Machine listening techniques are used to help the performer adapt to new contexts. Machine learning can play an important role in the development of such systems. In the end, as with any musical instrument, human skill is essential. Practice is required not only for the development of musically appropriate human motor programs but for the adaptation of the computer-based instrument as well.
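The LMS room-tuning step mentioned above can be illustrated with a generic least-mean-squares adaptive filter (a sketch under simplifying assumptions — a short FIR "room" response and white-noise excitation — not the specific system described in the abstract):

```python
import numpy as np

def lms_identify(x, d, n_taps, mu=0.05):
    """Adapt FIR weights w so the filtered excitation tracks the desired
    signal d (classic least-mean-squares update, per-sample)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e[n] = d[n] - w @ u                 # instantaneous error
        w += 2.0 * mu * e[n] * u            # gradient-descent step
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)               # excitation signal
h = np.array([0.5, -0.3, 0.2])              # unknown "room" impulse response
d = np.convolve(x, h)[:len(x)]              # measured room output
w, e = lms_identify(x, d, n_taps=3)         # w converges toward h
```

In a real tuning pass the estimated response would then be inverted (within limits) to equalize the speaker-room combination.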
Focus Your Young Visitors: Kids Innovation--Fundamental Changes in Digital Edutainment.
ERIC Educational Resources Information Center
Sauer, Sebastian; Gobel, Stefan
With regard to the acceptance of human-computer interfaces, immersion represents one of the most important methods for attracting young visitors into museum exhibitions. Exciting and diversely presented content as well as intuitive, natural and human-like interfaces are indispensable to bind users to an interactive system with real and digital…
Mala, S.; Latha, K.
2014-01-01
Activity recognition is needed in a variety of applications, for example, surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, the evolutionary algorithm Differential Evolution (DE), an efficient optimizer, is used for finding informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interactions with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum redundancy maximum relevance features, and Differential Evolution based features. This work concentrates on the feature selection algorithm based on DE in order to improve classification for faultless activity recognition. PMID:25574185
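A generic DE loop for binary feature selection can be sketched as follows (an illustrative sketch only: the relevance scores stand in for a real EOG feature criterion, and the population size, F, and CR values are assumptions, not the authors' settings):

```python
import numpy as np

def de_feature_select(score_fn, n_feats, pop=20, gens=50, F=0.8, CR=0.9, seed=0):
    """Differential Evolution over real vectors in [0,1]^d; thresholding
    each vector at 0.5 yields a binary feature mask to be scored."""
    rng = np.random.default_rng(seed)
    X = rng.random((pop, n_feats))
    fit = np.array([score_fn(x > 0.5) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            v = np.clip(a + F * (b - c), 0.0, 1.0)      # mutation
            cross = rng.random(n_feats) < CR            # binomial crossover
            cross[rng.integers(n_feats)] = True         # force one gene from v
            trial = np.where(cross, v, X[i])
            f = score_fn(trial > 0.5)
            if f >= fit[i]:                             # greedy selection
                X[i], fit[i] = trial, f
    return X[np.argmax(fit)] > 0.5

# Hypothetical per-feature relevance, standing in for a real criterion:
relevance = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
def score(mask):
    return float(relevance @ mask - 0.2 * mask.sum())   # reward relevance, penalize size

best = de_feature_select(score, n_feats=5)              # should keep features 0 and 2
```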
Interactive lung segmentation in abnormal human and animal chest CT scans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kockelkorn, Thessa T. J. P., E-mail: thessa@isi.uu.nl; Viergever, Max A.; Schaefer-Prokop, Cornelia M.
2014-08-15
Purpose: Many medical image analysis systems require segmentation of the structures of interest as a first step. For scans with gross pathology, automatic segmentation methods may fail. The authors’ aim is to develop a versatile, fast, and reliable interactive system to segment anatomical structures. In this study, this system was used for segmenting lungs in challenging thoracic computed tomography (CT) scans. Methods: In volumetric thoracic CT scans, the chest is segmented and divided into 3D volumes of interest (VOIs), containing voxels with similar densities. These VOIs are automatically labeled as either lung tissue or nonlung tissue. The automatic labeling results can be corrected using an interactive or a supervised interactive approach. When using the supervised interactive system, the user is shown the classification results per slice, whereupon he/she can adjust incorrect labels. The system is retrained continuously, taking the corrections and approvals of the user into account. In this way, the system learns to make a better distinction between lung tissue and nonlung tissue. When using the interactive framework without supervised learning, the user corrects all incorrectly labeled VOIs manually. Both interactive segmentation tools were tested on 32 volumetric CT scans of pigs, mice and humans, containing pulmonary abnormalities. Results: On average, supervised interactive lung segmentation took under 9 min of user interaction. Algorithm computing time was 2 min on average, but can easily be reduced. On average, 2.0% of all VOIs in a scan had to be relabeled. Lung segmentation using the interactive segmentation method took on average 13 min and involved relabeling 3.0% of all VOIs on average. The resulting segmentations correspond well to manual delineations of eight axial slices per scan, with an average Dice similarity coefficient of 0.933. Conclusions: The authors have developed two fast and reliable methods for interactive lung segmentation in challenging chest CT images. Neither system requires prior knowledge of the scans under consideration, and both work on a variety of scans.
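The overlap metric reported above is the Dice similarity coefficient, which for two binary masks is 2|A ∩ B| / (|A| + |B|). A minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
```

For example, masks `[1, 1, 0, 0]` and `[1, 0, 1, 0]` share one voxel out of two apiece, giving a coefficient of 0.5; identical masks give 1.0.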
Cybernetic anthropomorphic machine systems
NASA Technical Reports Server (NTRS)
Gray, W. E.
1974-01-01
Functional descriptions are provided for a number of cybernetic man-machine systems that augment the capacity of normal human beings in the areas of strength, reach or physical size, and environmental interaction, and that are also applicable to aiding the neurologically handicapped. Teleoperators, computer control, exoskeletal devices, quadruped vehicles, space maintenance systems, and communications equipment are considered.
From 'automation' to 'autonomy': the importance of trust repair in human-machine interaction.
de Visser, Ewart J; Pak, Richard; Shaw, Tyler H
2018-04-09
Modern interactions with technology are increasingly moving away from simple human use of computers as tools to the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centred approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user's needs and preferences. By adapting literature from industrial psychology, we propose a framework to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems. Practitioner Summary: This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust repair abilities will ensure future technology maintains and repairs relationships with their human partners.
Nourani, Esmaeil; Khunjush, Farshad; Durmuş, Saliha
2016-05-24
Pathogenic microorganisms exploit host cellular mechanisms and evade host defense mechanisms through molecular pathogen-host interactions (PHIs). Therefore, comprehensive analysis of these PHI networks should be an initial step for developing effective therapeutics against infectious diseases. Computational prediction of PHI data is in increasing demand because of the scarcity of experimental data. Prediction of protein-protein interactions (PPIs) within PHI systems can be formulated as a classification problem, which requires knowledge of non-interacting protein pairs. This is a restrictive requirement, since we lack datasets that report non-interacting protein pairs. In this study, we formulated the "computational prediction of PHI data" problem using kernel embedding of heterogeneous data. This eliminates the abovementioned requirement and enables us to predict new interactions without randomly labeling protein pairs as non-interacting. Domain-domain associations are used to filter the predicted results, leading to 175 novel PHIs between 170 human proteins and 105 viral proteins. To compare our results with the state-of-the-art studies that use a binary classification formulation, we modified our settings to consider the same formulation. Detailed evaluations are conducted, and our results show improvements of more than 10 percent in accuracy and AUC (area under the receiver operating characteristic curve) in comparison with state-of-the-art methods.
Implementations of the CC'01 Human-Computer Interaction Guidelines Using Bloom's Taxonomy
ERIC Educational Resources Information Center
Manaris, Bill; Wainer, Michael; Kirkpatrick, Arthur E.; Stalvey, RoxAnn H.; Shannon, Christine; Leventhal, Laura; Barnes, Julie; Wright, John; Schafer, J. Ben; Sanders, Dean
2007-01-01
In today's technology-laden society human-computer interaction (HCI) is an important knowledge area for computer scientists and software engineers. This paper surveys existing approaches to incorporate HCI into computer science (CS) and such related issues as the perceived gap between the interests of the HCI community and the needs of CS…
Activating Humans with Humor: A Dialogue System That Users Want to Interact With
NASA Astrophysics Data System (ADS)
Dybala, Pawel; Ptaszynski, Michal; Rzepka, Rafal; Araki, Kenji
The topic of Human Computer Interaction (HCI) has been gathering more and more scientific attention of late. A very important, but often undervalued, area in this field is human engagement: that is, a person's commitment to take part in and continue the interaction. In this paper we describe work on a humor-equipped casual conversational system (chatterbot) and investigate the effect of humor on a user's engagement in the conversation. A group of users conversed with two systems: one with and one without humor. The chat logs were then analyzed using an emotive analysis system to check user reactions and attitudes towards each system. Results were projected on Russell's two-dimensional emotiveness space to evaluate the positivity/negativity and activation/deactivation of these emotions. This analysis indicated that emotions elicited by the humor-equipped system were more positively active and less negatively active than those elicited by the system without humor. The implications of these results and their relation to user engagement in the conversation are discussed. We also propose a distinction between positive and negative engagement.
Prosodic alignment in human-computer interaction
NASA Astrophysics Data System (ADS)
Suzuki, N.; Katagiri, Y.
2007-06-01
Androids that replicate humans in form also need to replicate them in behaviour to achieve a high level of believability or lifelikeness. We explore the minimal social cues that can induce in people the human tendency for social acceptance, or ethopoeia, toward artifacts, including androids. It has been observed that people exhibit a strong tendency to adjust to each other, through a number of speech and language features in human-human conversational interactions, to obtain communication efficiency and emotional engagement. We investigate in this paper the phenomena related to prosodic alignment in human-computer interactions, with particular focus on human-computer alignment of speech characteristics. We found that people exhibit unidirectional and spontaneous short-term alignment of loudness and response latency in their speech in response to computer-generated speech. We believe this phenomenon of prosodic alignment provides one of the key components for building social acceptance of androids.
CBP for Field Workers – Results and Insights from Three Usability and Interface Design Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Bly, Aaron Douglas
2015-09-01
Nearly all activities that involve human interaction with the systems in a nuclear power plant are guided by procedures. Even though the paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety, improving procedure use could yield significant savings in increased efficiency as well as improved nuclear safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use and adherence, researchers in the Light-Water Reactor Sustainability (LWRS) Program, together with the nuclear industry, have been investigating the possibility and feasibility of replacing the current paper-based procedure process with a computer-based procedure (CBP) system. This report describes a field evaluation of new design concepts of a prototype computer-based procedure system.
The Study on Human-Computer Interaction Design Based on the Users’ Subconscious Behavior
NASA Astrophysics Data System (ADS)
Li, Lingyuan
2017-09-01
Human-computer interaction is human-centered. An excellent interaction design should focus on the study of user experience, which largely derives from consistency between the design and users' behavioral habits. However, users' behavioral habits often stem from the subconscious. It is therefore advantageous to leverage users' subconscious behavior to achieve a design's intention and maximize the value of a product's functions, an approach that is gradually becoming a new trend in this field.
Glowacki, David R; O'Connor, Michael; Calabró, Gaetano; Price, James; Tew, Philip; Mitchell, Thomas; Hyde, Joseph; Tew, David P; Coughtrie, David J; McIntosh-Smith, Simon
2014-01-01
With advances in computational power, the rapidly growing role of computational/simulation methodologies in the physical sciences, and the development of new human-computer interaction technologies, the field of interactive molecular dynamics seems destined to expand. In this paper, we describe and benchmark the software algorithms and hardware setup for carrying out interactive molecular dynamics utilizing an array of consumer depth sensors. The system works by interpreting the human form as an energy landscape, and superimposing this landscape on a molecular dynamics simulation to chaperone the motion of the simulated atoms, affecting both graphics and sonified simulation data. GPU acceleration has been key to achieving our target of 60 frames per second (FPS), giving an extremely fluid interactive experience. GPU acceleration has also allowed us to scale the system for use in immersive 360° spaces with an array of up to ten depth sensors, allowing several users to simultaneously chaperone the dynamics. The flexibility of our platform for carrying out molecular dynamics simulations has been considerably enhanced by wrappers that facilitate fast communication with a portable selection of GPU-accelerated molecular force evaluation routines. In this paper, we describe a 360° atmospheric molecular dynamics simulation we have run in a chemistry/physics education context. We also describe initial tests in which users have been able to chaperone the dynamics of 10-alanine peptide embedded in an explicit water solvent. Using this system, both expert and novice users have been able to accelerate peptide rare event dynamics by 3-4 orders of magnitude.
ERIC Educational Resources Information Center
Hung, Wei-Chen; Smith, Thomas J.; Harris, Marian S.; Lockard, James
2010-01-01
This study adopted design and development research methodology (Richey & Klein, "Design and development research: Methods, strategies, and issues," 2007) to systematically investigate the process of applying instructional design principles, human-computer interaction, and software engineering to a performance support system (PSS) for behavior…
How to Build Bridges between Intelligent Tutoring System Subfields of Research
ERIC Educational Resources Information Center
Pavlik, Philip, Jr.; Toth, Joe
2010-01-01
The plethora of different subfields in intelligent tutoring systems (ITS) are often difficult to integrate theoretically when analyzing how to design an intelligent tutor. Important principles of design are claimed by many subfields, including but not limited to: design, human-computer interaction, perceptual psychology, cognitive psychology,…
Has computational creativity successfully made it "Beyond the Fence" in musical theatre?
NASA Astrophysics Data System (ADS)
Jordanous, Anna
2017-10-01
A significant test for software is to task it with replicating human performance, as done recently with creative software and the commercial project Beyond the Fence (undertaken for a television documentary Computer Says Show). The remit of this project was to use computer software as much as possible to produce "the world's first computer-generated musical". Several creative systems were used to generate this musical, which was performed in London's West End in 2016. This paper considers the challenge of evaluating this project. Current computational creativity evaluation methods are ill-suited to evaluating projects that involve creative input from multiple systems and people. Following recent inspiration within computational creativity research from interaction design, here the DECIDE evaluation framework is applied to evaluate the Beyond the Fence project. Evaluation finds that the project was reasonably successful at achieving the task of using computational generation to produce a credible musical. Lessons have been learned for future computational creativity projects though, particularly for affording creative software more agency and enabling software to interact with other creative partners. Upon reflection, the DECIDE framework emerges as a useful evaluation "checklist" (if not a tangible operational methodology) for evaluating multiple creative systems participating in a creative task.
The Human-Computer Interaction of Cross-Cultural Gaming Strategy
ERIC Educational Resources Information Center
Chakraborty, Joyram; Norcio, Anthony F.; Van Der Veer, Jacob J.; Andre, Charles F.; Miller, Zachary; Regelsberger, Alexander
2015-01-01
This article explores the cultural dimensions of the human-computer interaction that underlies gaming strategies. The article is a desktop study of existing literature and is organized into five sections. The first examines the cultural aspects of knowledge processing. The social constructs technology interaction is discussed. Following this, the…
Evaluation of an eye-pointer interaction device for human-computer interaction.
Cáceres, Enrique; Carrasco, Miguel; Ríos, Sebastián
2018-03-01
Advances in eye-tracking technology have led to better human-computer interaction, and involve controlling a computer without any kind of physical contact. This research describes the transformation of a commercial eye-tracker for use as an alternative peripheral device in human-computer interactions, implementing a pointer that only needs the eye movements of a user facing a computer screen, thus replacing the need to control the software by hand movements. The experiment was performed with 30 test individuals who used the prototype with a set of educational videogames. The results show that, although most of the test subjects would prefer a mouse to control the pointer, the prototype tested has an empirical precision similar to that of the mouse, either when trying to control its movements or when attempting to click on a point of the screen.
Low-Dimensional Models for Physiological Systems: Nonlinear Coupling of Gas and Liquid Flows
NASA Astrophysics Data System (ADS)
Staples, A. E.; Oran, E. S.; Boris, J. P.; Kailasanath, K.
2006-11-01
Current computational models of biological organisms focus on the details of a specific component of the organism. For example, very detailed models of the human heart, an aorta, a vein, or part of the respiratory or digestive system, are considered either independently from the rest of the body, or as interacting simply with other systems and components in the body. In actual biological organisms, these components and systems are strongly coupled and interact in complex, nonlinear ways leading to complicated global behavior. Here we describe a low-order computational model of two physiological systems, based loosely on a circulatory and respiratory system. Each system is represented as a one-dimensional fluid system with an interconnected series of mass sources, pumps, valves, and other network components, as appropriate, representing different physical organs and system components. Preliminary results from a first version of this model system are presented.
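As an illustration of the kind of low-order network element the abstract describes, consider a single compliant compartment fed by a constant-flow pump and drained through a resistance (a toy sketch; the parameter values and the single-compartment simplification are assumptions made here, not taken from the paper):

```python
def simulate_compartment(q_in=1.0, R=2.0, C=1.0, dt=0.01, t_end=20.0):
    """One lumped 'vessel' with compliance C and outflow resistance R:
        dP/dt = (q_in - P/R) / C
    integrated with forward Euler. At steady state the outflow P/R
    balances the pump, so the pressure settles at P = q_in * R."""
    p, t = 0.0, 0.0
    while t < t_end:
        p += dt * (q_in - p / R) / C
        t += dt
    return p
```

Coupling several such compartments through shared flows (pumps, valves, junctions) gives exactly the kind of one-dimensional network model the abstract sketches, where nonlinear elements make the global behavior nontrivial.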
Tanaka, Hiroki; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi
2017-01-01
Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children and young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members of the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system's effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and as a supplement to human-based training, anywhere and anytime.
Ubiquitous Wireless Smart Sensing and Control
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
Ubiquitous Wireless Smart Sensing and Control. Pumps and Pipes JSC: Uniquely Houston
NASA Technical Reports Server (NTRS)
Wagner, Raymond
2013-01-01
Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.
Visual analytics as a translational cognitive science.
Fisher, Brian; Green, Tera Marie; Arias-Hernández, Richard
2011-07-01
Visual analytics is a new interdisciplinary field of study that calls for a more structured scientific approach to understanding the effects of interaction with complex graphical displays on human cognitive processes. Its primary goal is to support the design and evaluation of graphical information systems that better support cognitive processes in areas as diverse as scientific research and emergency management. The methodologies that make up this new field are as yet ill defined. This paper proposes a pathway for development of visual analytics as a translational cognitive science that bridges fundamental research in human/computer cognitive systems and design and evaluation of information systems in situ. Achieving this goal will require the development of enhanced field methods for conceptual decomposition of human/computer cognitive systems that maps onto laboratory studies, and improved methods for conducting laboratory investigations that might better map onto real-world cognitive processes in technology-rich environments. Copyright © 2011 Cognitive Science Society, Inc.
When does a physical system compute?
Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv
2014-09-08
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.
ERIC Educational Resources Information Center
McKay, Elspeth; Vilela, Cenie
2011-01-01
The purpose of this paper is to outline government online training practice. We searched individual research domains of the human-dimensions of Human Computer Interaction (HCI), information and communications technologies (ICT) and instructional design for evidence of either corporate sector or government training practices. We overlapped these…
Automatic scanning and measuring using POLLY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fields, T.
1993-07-01
The HPD and PEPR automatic measuring systems, which have been described by B. Powell and I. Pless at this conference, were developed in the 1960s to be used for what would now be called "batch processing." That is, an entire reel of bubble chamber film containing interesting events whose tracks had been rough-digitized would be processed in an extended run by a dedicated computer/precision digitizer hardware system, with no human intervention. Then, at a later time, events for which the precision measurement did not appear to be successful would be handled with some type of "fixup" station or process. By contrast, the POLLY system included from the start not only a computer and a precision CRT measuring device, but also a human operator who could have convenient two-way interactions with the computer and could also view the picture directly. Inclusion of a human as a key part of the system had some important beneficial effects, as has been described in the original papers. In this note the author summarizes those effects, and also points out connections between the POLLY system philosophy and subsequent developments in both high energy physics data analysis and computing systems.
High-performance biocomputing for simulating the spread of contagion over large contact networks
2012-01-01
Background Many important biological problems can be modeled as contagion diffusion processes over interaction networks. This article shows how the EpiSimdemics interaction-based simulation system can be applied to the general contagion diffusion problem. Two specific problems, computational epidemiology and human immune system modeling, are given as examples. We then show how the graphics processing unit (GPU) within each compute node of a cluster can effectively be used to speed-up the execution of these types of problems. Results We show that a single GPU can accelerate the EpiSimdemics computation kernel by a factor of 6 and the entire application by a factor of 3.3, compared to the execution time on a single core. When 8 CPU cores and 2 GPU devices are utilized, the speed-up of the computational kernel increases to 9.5. When combined with effective techniques for inter-node communication, excellent scalability can be achieved without significant loss of accuracy in the results. Conclusions We show that interaction-based simulation systems can be used to model disparate and highly relevant problems in biology. We also show that offloading some of the work to GPUs in distributed interaction-based simulations can be an effective way to achieve increased intra-node efficiency. PMID:22537298
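As a reading aid for the reported figures (not part of the study itself), Amdahl's law connects the kernel speedup (6x) and whole-application speedup (3.3x) to the fraction of single-core runtime the kernel must have occupied:

```python
# Back-of-envelope Amdahl's-law check of the reported numbers: if the GPU
# accelerates the kernel 6x and the whole application 3.3x, what fraction
# of runtime was the kernel? (A reading aid, not the authors' analysis.)

def overall_speedup(kernel_fraction, kernel_speedup):
    """Amdahl's law: only kernel_fraction of the runtime is accelerated."""
    return 1.0 / ((1.0 - kernel_fraction) + kernel_fraction / kernel_speedup)

def implied_kernel_fraction(app_speedup, kernel_speedup):
    # Invert Amdahl's law, 1/S = (1 - f) + f/s, for the fraction f.
    return (1.0 - 1.0 / app_speedup) / (1.0 - 1.0 / kernel_speedup)

f = implied_kernel_fraction(app_speedup=3.3, kernel_speedup=6.0)
print(f"kernel was ~{f:.0%} of single-core runtime")  # ~84%
```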
Users matter: multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
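A minimal sketch of the user-level idea, with invented parameters (the actual study also models network, OS, and process levels): each user agent adapts its job submissions to the queue length it observes.

```python
import random

# Toy multi-agent model of adaptive cluster users (illustrative only):
# each agent submits a job only when the observed queue is shorter than
# its personal tolerance, so demand adapts to load.

random.seed(0)

class UserAgent:
    def __init__(self, patience):
        self.patience = patience  # queue length this user will tolerate
        self.submitted = 0

    def act(self, queue_len):
        if queue_len < self.patience:  # adaptive behavior: back off when busy
            self.submitted += 1
            return 1
        return 0

def simulate(users, steps, service_rate=2):
    queue = 0
    for _ in range(steps):
        queue += sum(u.act(queue) for u in users)
        queue = max(0, queue - service_rate)  # cluster drains jobs each step
    return queue

users = [UserAgent(patience=random.randint(1, 10)) for _ in range(5)]
print(simulate(users, steps=100), sum(u.submitted for u in users))
```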
Design Science in Human-Computer Interaction: A Model and Three Examples
ERIC Educational Resources Information Center
Prestopnik, Nathan R.
2013-01-01
Humanity has entered an era where computing technology is virtually ubiquitous. From websites and mobile devices to computers embedded in appliances on our kitchen counters and automobiles parked in our driveways, information and communication technologies (ICTs) and IT artifacts are fundamentally changing the ways we interact with our world.…
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
The 'Biologically-Inspired Computing' Column
NASA Technical Reports Server (NTRS)
Hinchey, Mike
2006-01-01
The field of Biology changed dramatically in 1953, with the determination by Francis Crick and James Dewey Watson of the double helix structure of DNA. This discovery changed Biology for ever, allowing the sequencing of the human genome, and the emergence of a "new Biology" focused on DNA, genes, proteins, data, and search. Computational Biology and Bioinformatics heavily rely on computing to facilitate research into life and development. Simultaneously, an understanding of the biology of living organisms indicates a parallel with computing systems: molecules in living cells interact, grow, and transform according to the "program" dictated by DNA. Moreover, paradigms of Computing are emerging based on modelling and developing computer-based systems exploiting ideas that are observed in nature. This includes building into computer systems self-management and self-governance mechanisms that are inspired by the human body's autonomic nervous system, modelling evolutionary systems analogous to colonies of ants or other insects, and developing highly-efficient and highly-complex distributed systems from large numbers of (often quite simple) largely homogeneous components to reflect the behaviour of flocks of birds, swarms of bees, herds of animals, or schools of fish. This new field of "Biologically-Inspired Computing", often known in other incarnations by other names, such as: Autonomic Computing, Pervasive Computing, Organic Computing, Biomimetics, and Artificial Life, amongst others, is poised at the intersection of Computer Science, Engineering, Mathematics, and the Life Sciences. Successes have been reported in the fields of drug discovery, data communications, computer animation, control and command, exploration systems for space, undersea, and harsh environments, to name but a few, and augur much promise for future progress.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
Innovations in healthcare information systems promise to revolutionize and streamline healthcare processes worldwide. However, the complexity of these systems and the need to better understand issues related to human-computer interaction have slowed progress in this area. In this chapter the authors describe their work in using methods adapted from usability engineering, video ethnography and analysis of digital log files for improving our understanding of complex real-world healthcare interactions between humans and technology. The approaches taken are cost-effective and practical and can provide detailed ethnographic data on issues health professionals and consumers encounter while using systems as well as potential safety problems. The work is important in that it can be used in techno-anthropology to characterize complex user interactions with technologies and also to provide feedback into redesign and optimization of improved healthcare information systems.
Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi
2017-01-01
Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children or young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members from the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system’s effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and the supplemental use of human-based training anywhere and anytime. PMID:28796781
Toward the Language-Ready Brain: Biological Evolution and Primate Comparisons.
Arbib, Michael A
2017-02-01
The approach to language evolution suggested here focuses on three questions: How did the human brain evolve so that humans can develop, use, and acquire languages? How can the evolutionary quest be informed by studying brain, behavior, and social interaction in monkeys, apes, and humans? How can computational modeling advance these studies? I hypothesize that the brain is language ready in that the earliest humans had protolanguages but not languages (i.e., communication systems endowed with rich and open-ended lexicons and grammars supporting a compositional semantics), and that it took cultural evolution to yield societies (a cultural constructed niche) in which language-ready brains could become language-using brains. The mirror system hypothesis is a well-developed example of this approach, but I offer it here not as a closed theory but as an evolving framework for the development and analysis of conflicting subhypotheses in the hope of their eventual integration. I also stress that computational modeling helps us understand the evolving role of mirror neurons, not in and of themselves, but only in their interaction with systems "beyond the mirror." Because a theory of evolution needs a clear characterization of what it is that evolved, I also outline ideas for research in neurolinguistics to complement studies of the evolution of the language-ready brain. A clear challenge is to go beyond models of speech comprehension to include sign language and models of production, and to link language to visuomotor interaction with the physical and social world.
Pedagogical Agents as Learning Companions: The Impact of Agent Emotion and Gender
ERIC Educational Resources Information Center
Kim, Yanghee; Baylor, A. L.; Shen, E.
2007-01-01
The potential of emotional interaction between human and computer has recently interested researchers in human-computer interaction. The instructional impact of this interaction in learning environments has not been established, however. This study examined the impact of emotion and gender of a pedagogical agent as a learning companion (PAL) on…
NASA Technical Reports Server (NTRS)
Corker, Kevin M.; Labacqz, J. Victor (Technical Monitor)
1997-01-01
The Man-Machine Integration Design and Analysis System (MIDAS), developed under a joint U.S. Army and NASA cooperative agreement, is intended to assist designers of complex human/automation systems in successfully incorporating human performance capabilities and limitations into decision and action support systems. MIDAS is a computational representation of multiple human operators; selected perceptual, cognitive, and physical functions of those operators; and the physical/functional representation of the equipment with which they operate. MIDAS has been used as an integrated predictive framework for the investigation of human/machine systems, particularly in situations with high demands on the operators. We have extended the human performance models to include representation of both human operators and intelligent aiding systems in flight management and air traffic service. The focus of this development is to predict human performance in response to aiding systems developed to identify aircraft conflicts and to assist in the shared authority for resolution. The demands of this application require representation of many intelligent agents sharing world-models, coordinating action/intention, and cooperatively scheduling goals and actions in a somewhat unpredictable world of operations. In recent applications to airborne systems development, MIDAS has demonstrated an ability to predict flight crew decision-making and procedural behavior when interacting with automated flight management systems and Air Traffic Control. In this paper, we describe two enhancements to MIDAS. The first involves the addition of working memory in the form of an articulatory buffer for verbal communication protocols and a visuo-spatial buffer for communications via digital datalink. The second enhancement is a representation of multiple operators working as a team.
This enhanced model was used to predict the performance of human flight crews and their level of compliance with commercial aviation communication procedures. We show how the data produced by MIDAS compares with flight crew performance data from full mission simulations. Finally, we discuss the use of these features to study communication issues connected with aircraft-based separation assurance.
Exploring host–microbiota interactions in animal models and humans
Kostic, Aleksandar D.; Howitt, Michael R.; Garrett, Wendy S.
2013-01-01
The animal and bacterial kingdoms have coevolved and coadapted in response to environmental selective pressures over hundreds of millions of years. The meta'omics revolution in both sequencing and its analytic pipelines is fostering an explosion of interest in how the gut microbiome impacts physiology and propensity to disease. Gut microbiome studies are inherently interdisciplinary, drawing on approaches and technical skill sets from the biomedical sciences, ecology, and computational biology. Central to unraveling the complex biology of environment, genetics, and microbiome interaction in human health and disease is a deeper understanding of the symbiosis between animals and bacteria. Experimental model systems, including mice, fish, insects, and the Hawaiian bobtail squid, continue to provide critical insight into how host–microbiota homeostasis is constructed and maintained. Here we consider how model systems are influencing current understanding of host–microbiota interactions and explore recent human microbiome studies. PMID:23592793
Collins, Anne G E; Frank, Michael J
2018-03-06
Learning from rewards and punishments is essential to survival and facilitates flexible human behavior. It is widely appreciated that multiple cognitive and reinforcement learning systems contribute to decision-making, but the nature of their interactions is elusive. Here, we leverage methods for extracting trial-by-trial indices of reinforcement learning (RL) and working memory (WM) in human electro-encephalography to reveal single-trial computations beyond that afforded by behavior alone. Neural dynamics confirmed that increases in neural expectation were predictive of reduced neural surprise in the following feedback period, supporting central tenets of RL models. Within- and cross-trial dynamics revealed a cooperative interplay between systems for learning, in which WM contributes expectations to guide RL, despite competition between systems during choice. Together, these results provide a deeper understanding of how multiple neural systems interact for learning and decision-making and facilitate analysis of their disruption in clinical populations.
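The RL/WM interplay described here is commonly modeled as a mixture of an incremental reinforcement learner and a capacity-limited memory store. The following sketch uses invented parameter names and values, not the authors' model:

```python
# Hedged sketch in the spirit of RL + working-memory mixture models:
# RL learns Q-values incrementally, WM perfectly stores the last rewarded
# action for a few stimuli, and choice blends the two systems.

import random
random.seed(1)

N_ACTIONS = 3
alpha, wm_weight, wm_capacity = 0.3, 0.8, 2  # illustrative parameters

q = {}   # incremental RL values: (stimulus, action) -> value
wm = {}  # working memory: stimulus -> last rewarded action

def choose(stimulus):
    if stimulus in wm and random.random() < wm_weight:
        return wm[stimulus]                       # WM guides choice
    values = [q.get((stimulus, a), 0.5) for a in range(N_ACTIONS)]
    return max(range(N_ACTIONS), key=lambda a: values[a])

def update(stimulus, action, reward):
    old = q.get((stimulus, action), 0.5)
    q[(stimulus, action)] = old + alpha * (reward - old)  # RL delta rule
    if reward:                                    # WM stores rewarded action
        wm[stimulus] = action
        while len(wm) > wm_capacity:              # capacity limit: drop oldest
            wm.pop(next(iter(wm)))

correct = {0: 2, 1: 0, 2: 1}  # hidden stimulus -> correct-action map
hits = 0
for trial in range(300):
    s = trial % 3
    a = choose(s)
    r = 1 if a == correct[s] else 0
    hits += r
    update(s, a, r)
print(hits)  # accuracy climbs well above the 1/3 chance level
```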
Human-like object tracking and gaze estimation with PKD android
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2018-01-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193
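The core geometry of converting eye tracking data plus head orientation into a world-frame gaze direction can be sketched as a rotation (illustrative only; the paper's pipeline uses full motion-capture head poses rather than just yaw and pitch):

```python
import math

# Illustrative gaze geometry (not the paper's code): the world-frame gaze
# vector is the head rotation applied to the eye-in-head direction that
# the eye tracker measures.

def head_rotation(yaw, pitch):
    """3x3 rotation for head yaw (about z) then pitch (about y), radians."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    yaw_m = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]
    pitch_m = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    return [[sum(yaw_m[i][k] * pitch_m[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def gaze_world(eye_dir, yaw, pitch):
    r = head_rotation(yaw, pitch)
    return [sum(r[i][j] * eye_dir[j] for j in range(3)) for i in range(3)]

# Eyes looking straight ahead (+x in the head frame), head turned 90 deg left:
g = gaze_world([1.0, 0.0, 0.0], yaw=math.pi / 2, pitch=0.0)
print([round(v, 3) for v in g])  # [0.0, 1.0, 0.0] -- gaze points along +y
```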
Human-like object tracking and gaze estimation with PKD android
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2016-05-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
Learning gestures for customizable human-computer interaction in the operating room.
Schwarz, Loren Arthur; Bigdelou, Ali; Navab, Nassir
2011-01-01
Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish these from other movements.
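As a much simpler stand-in for the manifold models described above, gesture recognition with rejection of unknown movements can be sketched as nearest-template matching (all feature values are invented for illustration):

```python
# Toy gesture recognizer: classify a sensor-feature vector by its nearest
# gesture template, rejecting movements far from every learned gesture.
# The paper learns per-gesture manifold models; this is only a stand-in.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(examples):
    """examples: gesture name -> list of feature vectors; store the mean."""
    templates = {}
    for name, vecs in examples.items():
        n = len(vecs)
        templates[name] = [sum(col) / n for col in zip(*vecs)]
    return templates

def recognize(templates, features, reject_threshold=1.0):
    name, d = min(((g, dist2(t, features)) for g, t in templates.items()),
                  key=lambda p: p[1])
    # Reject movements far from every learned gesture ("other movements").
    return name if d < reject_threshold else None

templates = train({
    "zoom":   [[0.9, 0.1, 0.0], [1.0, 0.2, 0.1]],
    "scroll": [[0.1, 0.9, 0.1], [0.0, 1.0, 0.0]],
})
print(recognize(templates, [0.95, 0.15, 0.05]))  # zoom
print(recognize(templates, [5.0, 5.0, 5.0]))     # None: not a known gesture
```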
NASA Astrophysics Data System (ADS)
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-08-01
Objective. At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real-world device has on subjects' control in comparison to a 2D virtual cursor task. Approach. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Main results. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s⁻¹. Significance. Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-08-01
At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real-world device has on subjects' control in comparison to a 2D virtual cursor task. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s⁻¹. Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.
The Vesalius Project: Interactive Computers in Anatomical Instruction.
ERIC Educational Resources Information Center
McCracken, Thomas O.; Spurgeon, Thomas L.
1991-01-01
Described is a high-resolution, interactive 3-D atlas of human/animal anatomy that students will use to learn the structure of the body and to understand their own bodies in health and disease. This system can be used to reinforce cadaver study or to serve as a substitute for institutions where it is not practical to use cadavers. (KR)
A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.
Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L
2003-01-01
Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.
Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals
NASA Astrophysics Data System (ADS)
Lisetti, Christine Lætitia; Nasoz, Fatma
2004-12-01
We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.
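The mapping from physiological signals to emotion labels can be illustrated with a toy supervised classifier (1-nearest-neighbour here; the training numbers are invented, and the paper evaluates three different learning algorithms):

```python
# Toy version of the mapping the abstract describes: physiological
# features (GSR, heart rate, skin temperature) to emotion labels via
# supervised learning. All training values are invented for illustration.

TRAIN = [  # (galvanic skin response, heart rate, skin temperature), label
    ((0.8, 95, 33.0), "fear"),
    ((0.7, 90, 33.5), "anger"),
    ((0.3, 70, 34.5), "sadness"),
    ((0.5, 85, 34.0), "amusement"),
]

def normalize(sample, lo=(0.0, 60.0, 32.0), hi=(1.0, 120.0, 36.0)):
    # Scale each channel to [0, 1] so no single unit dominates the distance.
    return tuple((v - a) / (b - a) for v, a, b in zip(sample, lo, hi))

def classify(sample):
    s = normalize(sample)
    def d(item):
        t = normalize(item[0])
        return sum((x - y) ** 2 for x, y in zip(s, t))
    return min(TRAIN, key=d)[1]  # 1-nearest-neighbour label

print(classify((0.78, 94, 33.1)))  # fear
```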
Wu, Dongrui; Lance, Brent J; Parsons, Thomas D
2013-01-01
Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both k nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing.
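The collaborative-filtering idea, pooling a user's few labelled samples with similarity-weighted samples from other subjects, can be sketched as follows (the weighting scheme and classifier are illustrative, not the paper's exact algorithm):

```python
# Hedged sketch of collaborative filtering across subjects: weight each
# auxiliary subject by a mean-squared-difference similarity on a shared
# calibration task, then pool their labelled samples with the new user's
# own samples for a weighted nearest-neighbour classifier.

def msd_similarity(cal_a, cal_b):
    msd = sum((x - y) ** 2 for x, y in zip(cal_a, cal_b)) / len(cal_a)
    return 1.0 / (1.0 + msd)  # high similarity = low mean squared difference

def pooled_classify(sample, user_data, aux_subjects, user_cal):
    # Each training sample carries a trust weight: 1.0 for the user's own
    # data, the subject's similarity score for auxiliary data.
    pool = [(x, y, 1.0) for x, y in user_data]
    for cal, data in aux_subjects:
        w = msd_similarity(user_cal, cal)
        pool.extend((x, y, w) for x, y in data)
    # Weighted 1-NN: distances from trusted sources are scaled down less.
    def score(item):
        x, _, w = item
        return sum((a - b) ** 2 for a, b in zip(sample, x)) / w
    return min(pool, key=score)[1]

user_cal = [0.1, 0.2, 0.3]                                # calibration responses
user_data = [([0.0, 0.0], "easy")]                        # one user-specific sample
aux = [([0.1, 0.2, 0.3], [([1.0, 1.0], "hard")]),         # very similar subject
       ([0.9, 0.9, 0.9], [([0.4, 0.4], "easy")])]         # dissimilar subject
print(pooled_classify([0.9, 0.9], user_data, aux, user_cal))  # hard
```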
Wu, Dongrui; Lance, Brent J.; Parsons, Thomas D.
2013-01-01
Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both k nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing. PMID:23437188
Adaptive control for eye-gaze input system
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Yin, Hairong
2004-01-01
The characteristics of vision-based human-computer interaction systems are analyzed, along with their current practical applications and limiting factors, and suitable information-processing methods are put forward. To make communication flexible and spontaneous, an algorithm for adaptive control of the user's head movement has been designed, and event-based methods and an object-oriented programming language were used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms meet the needs of the HCI.
Online mentalising investigated with functional MRI.
Kircher, Tilo; Blümel, Isabelle; Marjoram, Dominic; Lataster, Tineke; Krabbendam, Lydia; Weber, Jochen; van Os, Jim; Krach, Sören
2009-05-01
For successful interpersonal communication, inferring the intentions, goals or desires of others is highly advantageous. Increasingly, humans also interact with computers or robots. In this study, we sought to determine to what degree an interactive task, which involves receiving feedback from social partners that can be used to infer intent, engaged the medial prefrontal cortex, a region previously associated with Theory of Mind processes among others. Participants were scanned using fMRI as they played an adapted version of the Prisoner's Dilemma Game with alleged human and computer partners who were outside the scanner. The medial frontal cortex was activated when playing against both the human and the computer partner, while the direct contrast revealed significantly stronger signal change during the human-human interaction. The results suggest a link between activity in the medial prefrontal cortex and the partner played in a mentalising task. This signal change was also present for the computer partner. Attributing agency or a will to non-human actors might be an innate human capacity that could confer an evolutionary advantage.
Mewes, André; Hensen, Bennet; Wacker, Frank; Hansen, Christian
2017-02-01
In this article, we systematically examine the current state of research of systems that focus on touchless human-computer interaction in operating rooms and interventional radiology suites. We further discuss the drawbacks of current solutions and underline promising technologies for future development. A systematic literature search of scientific papers that deal with touchless control of medical software in the immediate environment of the operation room and interventional radiology suite was performed. This includes methods for touchless gesture interaction, voice control and eye tracking. Fifty-five research papers were identified and analyzed in detail including 33 journal publications. Most of the identified literature (62 %) deals with the control of medical image viewers. The others present interaction techniques for laparoscopic assistance (13 %), telerobotic assistance and operating room control (9 % each) as well as for robotic operating room assistance and intraoperative registration (3.5 % each). Only 8 systems (14.5 %) were tested in a real clinical environment, and 7 (12.7 %) were not evaluated at all. In the last 10 years, many advancements have led to robust touchless interaction approaches. However, only a few have been systematically evaluated in real operating room settings. Further research is required to cope with current limitations of touchless software interfaces in clinical environments. The main challenges for future research are the improvement and evaluation of usability and intuitiveness of touchless human-computer interaction and the full integration into productive systems as well as the reduction of necessary interaction steps and further development of hands-free interaction.
Understanding Usefulness in Human-Computer Interaction to Enhance User Experience Evaluation
ERIC Educational Resources Information Center
MacDonald, Craig Matthew
2012-01-01
The concept of usefulness has implicitly played a pivotal role in evaluation research, but the meaning of usefulness has changed over time from system reliability to user performance and learnability/ease of use for non-experts. Despite massive technical and social changes, usability remains the "gold standard" for system evaluation.…
2009-10-02
October. Jansen, B. J., Zhang, M., and Zhang, Y. (2007) Brand Awareness and the Evaluation of Search Results, 16th International World Wide Web...2007) The Effect of Brand Awareness on the Evaluation of Search Engine Results, Conference on Human Factors in Computing Systems (SIGCHI), Work-in
Design of a compact low-power human-computer interaction equipment for hand motion
NASA Astrophysics Data System (ADS)
Wu, Xianwei; Jin, Wenguang
2017-01-01
Human-Computer Interaction (HCI) raises demands of convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact, wearable, low-power HCI device applied to gesture recognition. The system combines multi-modal sensing signals, the vision signal and the motion signal, and the equipment integrates a depth camera and a motion sensor. After tight integration, the structure is compact and portable (40 mm × 30 mm). The system is built on a layered module framework, which supports real-time collection (60 fps), processing and transmission via synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes algorithms using the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. Results show that overall energy consumption can be as low as 0.5 W.
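The duty-cycling measures listed above (PWM-dimmed NIR LEDs, idle modes, dynamic peripheral management) can be illustrated with a back-of-the-envelope power budget; every figure below is an assumed, illustrative number, not a measurement from the paper.

```python
def average_power_mw(active_mw, idle_mw, duty_cycle):
    """Time-averaged draw of a component active for `duty_cycle` of the time."""
    return duty_cycle * active_mw + (1.0 - duty_cycle) * idle_mw

# Hypothetical budget: NIR LEDs PWM-dimmed to 25% duty, motion sensor
# always on, MCU busy 20% of the time and idling otherwise.
budget_mw = (average_power_mw(300.0, 1.0, 0.25)    # depth-camera NIR LEDs
             + average_power_mw(10.0, 10.0, 1.0)   # motion sensor
             + average_power_mw(120.0, 5.0, 0.20)) # MCU with idle mode
```

Under these assumed numbers the average draw stays well under the 0.5 W figure reported, which is the kind of headroom duty cycling is meant to buy.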
Modelling of human-machine interaction in equipment design of manufacturing cells
NASA Astrophysics Data System (ADS)
Cochran, David S.; Arinez, Jorge F.; Collins, Micah T.; Bi, Zhuming
2017-08-01
This paper proposes a systematic approach to model human-machine interactions (HMIs) in supervisory control of machining operations; it characterises the coexistence of machines and humans for an enterprise to balance the goals of automation/productivity and flexibility/agility. In the proposed HMI model, an operator is associated with a set of behavioural roles as a supervisor for multiple, semi-automated manufacturing processes. The model is innovative in the sense that (1) it represents an HMI based on its functions for process control but provides the flexibility for ongoing improvements in the execution of manufacturing processes; (2) it provides a computational tool to define functional requirements for an operator in HMIs. The proposed model can be used to design production systems at different levels of an enterprise architecture, particularly at the machine level in a production system where operators interact with semi-automation to accomplish the goal of 'autonomation' - automation that augments the capabilities of human beings.
Methodical and technological aspects of creation of interactive computer learning systems
NASA Astrophysics Data System (ADS)
Vishtak, N. M.; Frolov, D. A.
2017-01-01
The article presents a methodology for developing an interactive computer training system for power plant personnel. The methods used include a synthesis of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of developing interactive computer training systems for personnel preparation in educational and training centers is demonstrated. The development stages of computer training systems are identified, and factors in the efficient use of an interactive computer training system are analysed. An algorithm for the work performed at each development stage is offered, which makes it possible to optimize the time, financial and labor expenditure involved in creating the interactive computer training system.
Visual design for the user interface, Part 1: Design fundamentals.
Lynch, P J
1994-01-01
Digital audiovisual media and computer-based documents will be the dominant forms of professional communication in both clinical medicine and the biomedical sciences. The design of highly interactive multimedia systems will shortly become a major activity for biocommunications professionals. The problems of human-computer interface design are intimately linked with graphic design for multimedia presentations and on-line document systems. This article outlines the history of graphic interface design and the theories that have influenced the development of today's major graphic user interfaces.
Systems Biology for Organotypic Cell Cultures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grego, Sonia; Dougherty, Edward R.; Alexander, Francis J.
Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data. This consensus report summarizes the discussions held.
Workshop Report: Systems Biology for Organotypic Cell Cultures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph
Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.
Workshop Report: Systems Biology for Organotypic Cell Cultures
Grego, Sonia; Dougherty, Edward R.; Alexander, Francis Joseph; ...
2016-11-14
Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, “organotypic” cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.
Systems biology for organotypic cell cultures.
Grego, Sonia; Dougherty, Edward R; Alexander, Francis J; Auerbach, Scott S; Berridge, Brian R; Bittner, Michael L; Casey, Warren; Cooley, Philip C; Dash, Ajit; Ferguson, Stephen S; Fennell, Timothy R; Hawkins, Brian T; Hickey, Anthony J; Kleensang, Andre; Liebman, Michael N J; Martin, Florian; Maull, Elizabeth A; Paragas, Jason; Qiao, Guilin Gary; Ramaiahgari, Sreenivasa; Sumner, Susan J; Yoon, Miyoung
2017-01-01
Translating in vitro biological data into actionable information related to human health holds the potential to improve disease treatment and risk assessment of chemical exposures. While genomics has identified regulatory pathways at the cellular level, translation to the organism level requires a multiscale approach accounting for intra-cellular regulation, inter-cellular interaction, and tissue/organ-level effects. Tissue-level effects can now be probed in vitro thanks to recently developed systems of three-dimensional (3D), multicellular, "organotypic" cell cultures, which mimic functional responses of living tissue. However, there remains a knowledge gap regarding interactions across different biological scales, complicating accurate prediction of health outcomes from molecular/genomic data and tissue responses. Systems biology aims at mathematical modeling of complex, non-linear biological systems. We propose to apply a systems biology approach to achieve a computational representation of tissue-level physiological responses by integrating empirical data derived from organotypic culture systems with computational models of intracellular pathways to better predict human responses. Successful implementation of this integrated approach will provide a powerful tool for faster, more accurate and cost-effective screening of potential toxicants and therapeutics. On September 11, 2015, an interdisciplinary group of scientists, engineers, and clinicians gathered for a workshop in Research Triangle Park, North Carolina, to discuss this ambitious goal. Participants represented laboratory-based and computational modeling approaches to pharmacology and toxicology, as well as the pharmaceutical industry, government, non-profits, and academia. Discussions focused on identifying critical system perturbations to model, the computational tools required, and the experimental approaches best suited to generating key data.
Experiments on Interfaces To Support Query Expansion.
ERIC Educational Resources Information Center
Beaulieu, M.
1997-01-01
Focuses on the user and human-computer interaction aspects of the research based on the Okapi text retrieval system. Three experiments implementing different approaches to query expansion are described, including the use of graphical user interfaces with different windowing techniques. (Author/LRW)
The Human-Computer Interface and Information Literacy: Some Basics and Beyond.
ERIC Educational Resources Information Center
Church, Gary M.
1999-01-01
Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…
ERIC Educational Resources Information Center
Faiola, Anthony; Matei, Sorin Adam
2010-01-01
The evolution of human-computer interaction design (HCID) over the last 20 years suggests that there is a growing need for educational scholars to consider new and more applicable theoretical models of interactive product design. The authors suggest that such paradigms would call for an approach that would equip HCID students with a better…
ERIC Educational Resources Information Center
Brown, Abbie; Sugar, William
2004-01-01
A report on the efforts made to describe the range of human-computer interaction skills necessary to complete a program of study in Instructional Design Technology. Educators responsible for instructional media production courses have not yet articulated which among the wide range of possible interactions students must master for instructional…
1990-03-23
defined (personal communication between R. Pozos and Simon, 1985). In summary, there have been studies dealing with shivering which indicate that the...microcomputer (IBM PS/2, Model 30/286). The Firearms Training System combines features of several technologies, notably: interactive video-disc/computer...technology and laser designator/camera/computer/target-hit generation, which provides for immediate visual performance feedback. The subject is
Applicability of computational systems biology in toxicology.
Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie
2014-07-01
Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research. © 2014 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
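The hypothesis-generating step described above, feeding a chemical's experimentally determined protein targets into interaction networks to reach disease associations, can be sketched as a small graph traversal. The networks, gene names, and disease labels below are toy placeholders, not curated data.

```python
# Toy protein-protein interaction and protein-disease networks.
PPI = {"AHR": {"ARNT"}, "ARNT": {"AHR", "HIF1A"}, "HIF1A": {"ARNT"}}
PROTEIN_DISEASE = {"HIF1A": {"renal carcinoma"}}

def disease_hypotheses(chemical_targets, max_hops=2):
    """Diseases whose associated proteins are reachable from the chemical's
    measured targets within max_hops protein-protein interaction steps."""
    frontier, reached = set(chemical_targets), set(chemical_targets)
    for _ in range(max_hops):
        frontier = {n for p in frontier for n in PPI.get(p, set())} - reached
        reached |= frontier
    return {d for p in reached for d in PROTEIN_DISEASE.get(p, set())}
```

Each returned disease is only a hypothesis, to be tested with the more targeted animal or cell experiments the abstract describes.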
Scientific bases of human-machine communication by voice.
Schafer, R W
1995-01-01
The scientific bases for human-machine communication by voice are in the fields of psychology, linguistics, acoustics, signal processing, computer science, and integrated circuit technology. The purpose of this paper is to highlight the basic scientific and technological issues in human-machine communication by voice and to point out areas of future research opportunity. The discussion is organized around the following major issues in implementing human-machine voice communication systems: (i) hardware/software implementation of the system, (ii) speech synthesis for voice output, (iii) speech recognition and understanding for voice input, and (iv) usability factors related to how humans interact with machines. PMID:7479802
Argonne simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-04-01
A simulation framework has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed to run on parallel computers and distributed (networked) computer systems; however, a version for a stand-alone workstation is also available. The ITS simulator includes an Expert Driver Model (EDM) of instrumented 'smart' vehicles with in-vehicle navigation units. The EDM is capable of performing optimal route planning and communicating with Traffic Management Centers (TMC). A dynamic road map database is used for optimum route planning, where the data is updated periodically to reflect any changes in road or weather conditions. The TMC has probe vehicle tracking capabilities (displaying position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces that incorporate human-factors studies to support safety and operational research. Realistic modeling of variations in the posted driving speed is based on human factors studies that take into consideration weather, road conditions, the driver's personality and behavior, and vehicle type. The simulator has been developed on a distributed system of networked UNIX computers, but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of the developed simulator is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. Vehicle processes interact with each other and with ITS components by exchanging messages. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
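The simulator's novel feature, vehicles as autonomous processes that exchange messages with each other and with ITS components, can be caricatured in a few lines. The class, method, and link names below are invented for illustration and are unrelated to the Argonne code.

```python
from collections import deque

class Vehicle:
    """Toy autonomous-vehicle process: holds a planned route and reroutes
    when a Traffic Management Center advisory marks a link congested."""
    def __init__(self, vid, route):
        self.vid, self.route, self.inbox = vid, list(route), deque()

    def step(self):
        while self.inbox:                      # consume TMC messages
            blocked = self.inbox.popleft()
            self.route = [l for l in self.route if l != blocked]
        return self.route.pop(0) if self.route else None  # advance one link

def broadcast(vehicles, advisory):
    """TMC side of the 2-way interaction: push an advisory to every vehicle."""
    for v in vehicles:
        v.inbox.append(advisory)

cars = [Vehicle("a", ["L1", "L2", "L3"]), Vehicle("b", ["L2", "L4"])]
broadcast(cars, "L2")                      # link L2 reported congested
moves = {v.vid: v.step() for v in cars}    # {'a': 'L1', 'b': 'L4'}
```

In the real framework each vehicle would be a separate process on a parallel machine; here the message-passing pattern is all that is kept.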
2015 Marine Corps Security Environment Forecast: Futures 2030-2045
2015-01-01
The technologies that make the iPhone “smart” were publicly funded—the Internet, wireless networks, the global positioning system, microelectronics...Energy Revolution (63 percent); Internet of Things (ubiquitous sensors embedded in interconnected computing devices) (50 percent); “Sci-Fi...Neuroscience & artificial intelligence - Sensors/control systems - Power & energy - Human-robot interaction. Robots/autonomous systems will become part of the
Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.
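As a minimal, hypothetical stand-in for the SSVEP decoding such a system performs (the study's actual pipeline is not described at this level of detail), the sketch below picks the stimulus flicker frequency with the most spectral power in a synthetic EEG trace.

```python
import numpy as np

def detect_ssvep(signal, fs, stimulus_freqs):
    """Return the candidate flicker frequency with the largest power
    in the signal's spectrum: a toy SSVEP target detector."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    scores = [power[np.argmin(np.abs(freqs - f))] for f in stimulus_freqs]
    return stimulus_freqs[int(np.argmax(scores))]

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1.0 / 256)                  # 2 s of synthetic EEG at 256 Hz
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
target = detect_ssvep(eeg, 256, [8.0, 10.0, 12.0, 15.0])   # 12 Hz stimulus wins
```

In a game like "Get Coins," each detected frequency would map to one on-screen command; production systems use more robust detectors (e.g., canonical correlation analysis) rather than a single spectral peak.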
A machine learning approach to improve contactless heart rate monitoring using a webcam.
Monkaresi, Hamed; Calvo, Rafael A; Yan, Hong
2014-07-01
Unobtrusive, contactless recordings of physiological signals are very important for many health and human-computer interaction applications. Most current systems require sensors which intrusively touch the user's skin. Recent advances in contact-free physiological signals open the door to many new types of applications. This technology promises to measure heart rate (HR) and respiration using video only. The effectiveness of this technology, its limitations, and ways of overcoming them deserves particular attention. In this paper, we evaluate this technique for measuring HR in a controlled situation, in a naturalistic computer interaction session, and in an exercise situation. For comparison, HR was measured simultaneously using an electrocardiography device during all sessions. The results replicated the published results in controlled situations, but show that they cannot yet be considered as a valid measure of HR in naturalistic human-computer interaction. We propose a machine learning approach to improve the accuracy of HR detection in naturalistic measurements. The results demonstrate that the root mean squared error is reduced from 43.76 to 3.64 beats/min using the proposed method.
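The correction idea above, learning a mapping from noisy video-based estimates plus context such as motion level to the ECG ground truth, can be sketched with an ordinary least-squares model. The feature choice and all numbers are illustrative assumptions, not the paper's actual learner.

```python
import numpy as np

def fit_hr_correction(video_hr, motion, ecg_hr):
    """Least-squares fit of ECG heart rate from the webcam estimate and a
    motion-level feature; returns coefficients for [video_hr, motion, 1]."""
    X = np.column_stack([video_hr, motion, np.ones_like(video_hr)])
    coef, *_ = np.linalg.lstsq(X, ecg_hr, rcond=None)
    return coef

def rmse(pred, truth):
    """Root mean squared error in beats/min."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))
```

On data where the webcam error grows with subject motion, such a model can cut the RMSE substantially; the paper's reported reduction (43.76 to 3.64 beats/min) came from its own, richer method.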
Assessing the Impact of Educational Differences in HCI Design Practice
ERIC Educational Resources Information Center
Antunes, Pedro; Xiao, Lu; Pino, Jose A.
2014-01-01
Human-computer interaction (HCI) design generally involves collaboration from professionals in different disciplines. Trained in different design education systems, these professionals can have different conceptual understandings about design. Recognizing and identifying these differences are key issues for establishing shared design practices…
Interactive machine learning for health informatics: when do we need the human-in-the-loop?
Holzinger, Andreas
2016-06-01
Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet widely used, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.
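A toy version of the human-in-the-loop pattern, letting a human oracle label only the samples the learner is least certain about, might look as follows for a one-dimensional threshold concept. It assumes the first pool point is negative and the last positive, and it is only a sketch of the iML idea, not Holzinger's framework.

```python
def human_in_the_loop(pool, oracle, n_queries=3):
    """Toy iML loop for a 1-D threshold concept: the learner repeatedly asks
    the human agent to label the pool point nearest its current boundary.
    Assumes pool is sorted, pool[0] is negative and pool[-1] is positive."""
    labeled = [(pool[0], oracle(pool[0])), (pool[-1], oracle(pool[-1]))]
    for _ in range(n_queries):
        lo = max(x for x, y in labeled if y == 0)   # highest known negative
        hi = min(x for x, y in labeled if y == 1)   # lowest known positive
        seen = {x for x, _ in labeled}
        query = min((p for p in pool if p not in seen),
                    key=lambda p: abs(p - (lo + hi) / 2))
        labeled.append((query, oracle(query)))      # the human provides a label
    lo = max(x for x, y in labeled if y == 0)
    hi = min(x for x, y in labeled if y == 1)
    return (lo + hi) / 2

pool = [i / 10 for i in range(11)]
estimate = human_in_the_loop(pool, oracle=lambda x: int(x > 0.42))
```

Five labels (two seeds plus three queries) localize the boundary to within one pool step, whereas exhaustive labeling would take all eleven; this is the search-space reduction through heuristic sample selection that the abstract describes.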
Socio-inspired ICT. Towards a socially grounded society-ICT symbiosis
NASA Astrophysics Data System (ADS)
Ferscha, A.; Farrahi, K.; van den Hoven, J.; Hales, D.; Nowak, A.; Lukowicz, P.; Helbing, D.
2012-11-01
Modern ICT (Information and Communication Technology) has developed a vision where the "computer" is no longer associated with the concept of a single device or a network of devices, but rather the entirety of situated services originating in a digital world, which are perceived through the physical world. Services with explicit user input and output are increasingly being replaced by a computing landscape that senses the physical world via a huge variety of sensors and controls it via a plethora of actuators. The nature and appearance of computing devices are changing to be hidden in the fabric of everyday life, invisibly networked, and omnipresent, with applications largely based on the notions of context and knowledge. Interaction with such globe-spanning, modern ICT systems will presumably be more implicit, at the periphery of human attention, rather than explicit, i.e. at the focus of human attention. Socio-inspired ICT assumes that future, globe-scale ICT systems should be viewed as social systems. Such a view challenges research to identify and formalize the principles of interaction and adaptation in social systems, so as to be able to ground future ICT systems on those principles. This position paper is therefore concerned with the intersection of social behaviour and modern ICT, creating or recreating social conventions and social contexts through the use of pervasive, globe-spanning, omnipresent and participative ICT.
Educational technology, reimagined.
Eisenberg, Michael
2010-01-01
"Educational technology" is often equated in the popular imagination with "computers in the schools." But technology is much more than merely computers, and education is much more than mere schooling. The landscape of child-accessible technologies is blossoming in all sorts of directions: tools for communication, for physical construction and fabrication, and for human-computer interaction. These new systems and artifacts allow educational designers to think much more creatively about when and where learning takes place in children's lives, both within and outside the classroom.
A pen-based system to support pre-operative data collection within an anaesthesia department.
Sanz, M. F.; Gómez, E. J.; Trueba, I.; Cano, P.; Arredondo, M. T.; del Pozo, F.
1993-01-01
This paper describes the design and implementation of a pen-based computer system for remote pre-operative data collection. The system is envisaged to be used by anaesthesia staff at the different hospital scenarios where pre-operative data are generated. Pen-based technology offers important advantages in terms of portability and human-computer interaction, such as direct-manipulation interfaces through direct pointing, and "notebook user interface" metaphors. Since human factors analysis and user interface design are vital stages in achieving appropriate user acceptability, a methodology that integrates "usability" evaluation from the earliest development stages was used. Additionally, the selection of a pen-based computer system as a portable device to be used by health care personnel allows evaluation of the appropriateness of this new technology for remote data collection within the hospital environment. The work presented is currently being realised under the Research Project "TANIT: Telematics in Anaesthesia and Intensive Care", within the "A.I.M.--Telematics in Health Care" European Research Program. PMID:8130488
Visidep (TM): A Three-Dimensional Imaging System For The Unaided Eye
NASA Astrophysics Data System (ADS)
McLaurin, A. Porter; Jones, Edwin R.; Cathey, LeConte
1984-05-01
The VISIDEP process for creating images in three dimensions on flat screens is suitable for photographic, electrographic and computer generated imaging systems. Procedures for generating these images vary from medium to medium due to the specific requirements of each technology. Imaging requirements for photographic and electrographic media are more directly tied to the hardware than are computer based systems. Applications of these technologies are not limited to entertainment, but have implications for training, interactive computer/video systems, medical imaging, and inspection equipment. Through minor modification the system can provide three-dimensional images with accurately measurable relationships for robotics and adds this capability for future developments in artificial intelligence. In almost any area requiring image analysis or critical review, VISIDEP provides the added advantage of three-dimensionality. All of this is readily accomplished without aids to the human eye. The system can be viewed in full color, false-color infra-red, and monochromatic modalities from any angle and is also viewable with a single eye. Thus, the potential of application for this developing system is extensive and covers the broad spectrum of human endeavor from entertainment to scientific study.
The Motor System: The Whole and its Parts
Otten, E.
2001-01-01
Our knowledge of components of the human motor system has been growing steadily, but our understanding of its integration into a system is lagging behind. It is suggested that a combination of measurements of forces and movements of the motor system in a functionally meaningful environment in conjunction with computer simulations of the motor system may help us in understanding motor system properties. Neurotrauma can be seen as a natural deviation, with recovery as a slow path to yet another deviant state of the motor system. In that form they may be useful in explaining the close interaction between form and function of the human motor system. PMID:11530882
Toward Usable Interactive Analytics: Coupling Cognition and Computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; North, Chris; Chang, Remco
Interactive analytics provide users a myriad of computational means to aid in extracting meaningful information from large and complex datasets. Much prior work focuses either on advancing the capabilities of machine-centric approaches by the data mining and machine learning communities, or human-driven methods by the visualization and CHI communities. However, these methods do not yet support a true human-machine symbiotic relationship where users and machines work together collaboratively and adapt to each other to advance an interactive analytic process. In this paper we discuss some of the inherent issues, outlining what we believe are the steps toward usable interactive analytics that will ultimately increase the effectiveness for both humans and computers to produce insights.
Su, Kuo-Wei; Liu, Cheng-Li
2012-06-01
A conventional Nursing Information System (NIS), which supports the role of the nurse in some areas, is typically deployed as an immobile system. However, a traditional information system cannot respond to patients' conditions in real time, delaying the availability of this information. With advances in information technology, mobile devices are increasingly being used to extend the human mind's limited capacity to recall and process large numbers of relevant variables and to support information management, general administration, and clinical practice. Unfortunately, there have been few studies on combining a well-designed small-screen interface with a personal digital assistant (PDA) in clinical nursing. Some researchers have found that user interface design is an important factor in determining the usability and potential use of a mobile system. Therefore, this study proposes a systematic approach to the development of a mobile nursing information system (MNIS) based on Mobile Human-Computer Interaction (M-HCI) for use in clinical nursing. The system combines principles of small-screen interface design with user-specified requirements. In addition, the iconic functions were designed around metaphors that help users learn the system more quickly with less working-memory load. An experiment involving learnability testing, thinking aloud, and a questionnaire investigation was conducted to evaluate the effect of the MNIS on a PDA. The results show that the proposed MNIS supports learning well and yields higher satisfaction with respect to symbols, terminology, and system information.
An evaluative model of system performance in manned teleoperational systems
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1989-01-01
Manned teleoperational systems are used in aerospace operations in which humans must interact with machines remotely. Manual guidance of remotely piloted vehicles, control of a wind tunnel, and remote execution of a scientific procedure are examples of teleoperations. A four-input-parameter throughput (Tp) model is presented which can be used to evaluate complex, manned, teleoperations-based systems and make critical comparisons among candidate control systems. The first two parameters of this model deal with nominal (A) and off-nominal (B) predicted events, while the last two focus on measured events of two types, human performance (C) and system performance (D). Digital simulations showed that the expression A(1-B)/(C+D) produced the greatest homogeneity of variance and distribution symmetry. Results from a recently completed manned life science telescience experiment will be used to further validate the model. Complex, interacting teleoperational systems may be systematically evaluated using this expression, much as a computer benchmark is used.
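As a minimal sketch, the throughput expression can be computed directly; the parameter values below are illustrative placeholders, not data from the cited experiment:

```python
def throughput(a, b, c, d):
    """Four-parameter throughput (Tp) expression A(1-B)/(C+D):
    nominal predicted events (a), off-nominal predicted events (b),
    measured human performance (c), measured system performance (d).
    The scaling of the inputs is an assumption for illustration."""
    if c + d == 0:
        raise ValueError("c + d must be nonzero")
    return a * (1 - b) / (c + d)

# illustrative comparison of two candidate control systems
tp_sys1 = throughput(0.9, 0.1, 0.4, 0.5)
tp_sys2 = throughput(0.9, 0.3, 0.4, 0.5)
print(tp_sys1, tp_sys2)
```

A higher off-nominal rate (b) lowers Tp for the same measured performance, which is what makes the expression usable as a benchmark-style comparison.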
Merging Technology and Emotions: Introduction to Affective Computing.
Brigham, Tara J
2017-01-01
Affective computing technologies are designed to sense and respond based on human emotions. This technology allows a computer system to process the information gathered from various sensors to assess the emotional state of an individual. The system then offers a distinct response based on what it "felt." While this is completely unlike how most people interact with electronics today, this technology is likely to trickle into future everyday life. This column will explain what affective computing is, some of its benefits, and concerns with its adoption. It will also provide an overview of its implication in the library setting and offer selected examples of how and where it is currently being used.
Dialogue-Based Call: A Case Study on Teaching Pronouns
ERIC Educational Resources Information Center
Vlugter, P.; Knott, A.; McDonald, J.; Hall, C.
2009-01-01
We describe a computer assisted language learning (CALL) system that uses human-machine dialogue as its medium of interaction. The system was developed to help students learn the basics of the Maori language and was designed to accompany the introductory course in Maori running at the University of Otago. The student engages in a task-based…
Intelligent control system based on ARM for lithography tool
NASA Astrophysics Data System (ADS)
Chen, Changlong; Tang, Xiaoping; Hu, Song; Wang, Nan
2014-08-01
The control system of a traditional lithography tool is based on a PC and an MCU. The PC handles complex algorithms and human-computer interaction and communicates with the MCU via a serial port; the MCU controls motors, electromagnetic valves, etc. This architecture has shortcomings such as large volume, high power consumption, and wasted PC resources. In this paper, an embedded intelligent control system for a lithography tool, based on ARM, is presented. The control system uses the S5PV210 as its processor, taking over the functions of the PC in a traditional lithography tool, and provides good human-computer interaction through an LCD and a capacitive touch screen. Using Android 4.0.3 as the operating system, the equipment provides a clean, easy-to-use UI that makes control more user-friendly, and implements remote control and debugging, pushing product video information over the network. As a result, it is convenient for the equipment vendor to provide technical support to users. Finally, compared with a traditional lithography tool, this design eliminates the PC, using hardware resources efficiently and reducing cost and volume. Introducing an embedded OS and concepts from the Internet of Things into the design of lithography tools may well be a development trend.
Human-Computer Interaction in Smart Environments
Paravati, Gianluca; Gatteschi, Valentina
2015-01-01
Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Morookian, John-Michael; Monacos, S.; Lam, R.; Lebaw, C.; Bond, A.
2004-01-01
Eyetracking is one of the latest technologies that has shown potential in several areas including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals.
A computer-aided movement analysis system.
Fioretti, S; Leo, T; Pisani, E; Corradini, M L
1990-08-01
Interaction with biomechanical data concerning human movement analysis implies the adoption of various experimental equipment and the choice of suitable models, data processing, and graphical data restitution techniques. The integration of measurement setups with the associated experimental protocols and the relative software procedures constitutes a computer-aided movement analysis (CAMA) system. In the present paper such integration is mapped onto the causes that limit the clinical acceptance of movement analysis methods. The structure of the system is presented. A specific CAMA system devoted to posture analysis is described in order to show the attainable features. Scientific results obtained with the support of the described system are also reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.G.; Mioduszewski, R.J.
The Chemical Computer Man: Chemical Agent Response Simulation (CARS) is a computer model and simulation program for estimating the dynamic changes in human physiological dysfunction resulting from exposures to chemical-threat nerve agents. The newly developed CARS methodology simulates agent exposure effects on the following five indices of human physiological function: mental, vision, cardio-respiratory, visceral, and limbs. Mathematical models and the application of basic pharmacokinetic principles were incorporated into the simulation so that, for each chemical exposure, the relationships between exposure dosage, absorbed dosage (agent blood plasma concentration), and level of physiological response are computed as a function of time. CARS, as a simulation tool, is designed for users with little or no computer-related experience. The model combines maximum flexibility with a comprehensive, user-friendly, interactive menu-driven system. Users define an exposure problem and obtain immediate results displayed in tabular, graphical, and image formats. CARS has broad scientific and engineering applications, not only in technology for the soldier in the area of Chemical Defense, but also in minimizing animal testing in biomedical and toxicological research and in the development of a modeling system for human exposure to hazardous-waste chemicals.
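The dose-to-plasma-concentration link that CARS computes can be illustrated with a generic one-compartment, first-order-absorption model (the Bateman equation); this is a hedged sketch of the standard pharmacokinetic building block, not the CARS model itself, and all parameter values are illustrative:

```python
import math

def plasma_concentration(dose, ka, ke, v, t):
    """One-compartment model with first-order absorption (Bateman
    equation): concentration rises as the agent is absorbed (rate ka)
    and falls as it is eliminated (rate ke) from volume v.
    All parameters here are illustrative assumptions."""
    if ka == ke:
        raise ValueError("ka must differ from ke for this closed form")
    return (dose * ka / (v * (ka - ke))) * (math.exp(-ke * t) - math.exp(-ka * t))

# plasma level rises toward a peak, then declines
c1 = plasma_concentration(100.0, 1.0, 0.2, 10.0, 1.0)
c2 = plasma_concentration(100.0, 1.0, 0.2, 10.0, 2.0)
c8 = plasma_concentration(100.0, 1.0, 0.2, 10.0, 8.0)
print(c1, c2, c8)
```

Evaluating such a curve at each time step, and mapping concentration to a dysfunction level per index, gives the kind of time-resolved output the abstract describes.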
NASA Astrophysics Data System (ADS)
Yoo, Kyung-Hyan; Gretzel, Ulrike
Whether users are likely to accept the recommendations provided by a recommender system is of utmost importance to system designers and the marketers who implement them. By conceptualizing the advice seeking and giving relationship as a fundamentally social process, important avenues for understanding the persuasiveness of recommender systems open up. Specifically, research regarding the influence of source characteristics, which is abundant in the context of human-human relationships, can provide an important framework for identifying potential influence factors. This chapter reviews the existing literature on source characteristics in the context of human-human, human-computer, and human-recommender system interactions. It concludes that many social cues that have been identified as influential in other contexts have yet to be implemented and tested with respect to recommender systems. Implications for recommender system research and design are discussed.
A prototype system based on visual interactive SDM called VGC
NASA Astrophysics Data System (ADS)
Jia, Zelu; Liu, Yaolin; Liu, Yanfang
2009-10-01
In many application domains, data are collected and referenced by geo-spatial location. Spatial data mining, the discovery of interesting patterns in such databases, is an important capability in the development of database systems. Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions to many new, expensive, and complicated problems. For spatial data mining of large data sets to be effective, it is also important to include humans in the data exploration process and combine their flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual spatial data mining applies human visual perception to the exploration of large data sets. Presenting data in an interactive, graphical form often fosters new insights, encouraging the formation and validation of new hypotheses to the end of better problem-solving and deeper domain knowledge. In this paper, a visual interactive spatial data mining prototype system (visual geo-classify) based on VC++ 6.0 and MapObjects 2.0 is designed and developed. The basic spatial data mining algorithms used are decision trees and Bayesian networks, and data classification is realized through training, learning, and the integration of the two. The results indicate that it is a practical and extensible visual interactive spatial data mining tool.
From Network Analysis to Functional Metabolic Modeling of the Human Gut Microbiota.
Bauer, Eugen; Thiele, Ines
2018-01-01
An important hallmark of the human gut microbiota is its species diversity and complexity. Various diseases have been associated with a decreased diversity leading to reduced metabolic functionalities. Common approaches to investigate the human microbiota include high-throughput sequencing with subsequent correlative analyses. However, to understand the ecology of the human gut microbiota and consequently design novel treatments for diseases, it is important to represent the different interactions between microbes with their associated metabolites. Computational systems biology approaches can give further mechanistic insights by constructing data- or knowledge-driven networks that represent microbe interactions. In this minireview, we will discuss current approaches in systems biology to analyze the human gut microbiota, with a particular focus on constraint-based modeling. We will discuss various community modeling techniques with their advantages and differences, as well as their application to predict the metabolic mechanisms of intestinal microbial communities. Finally, we will discuss future perspectives and current challenges of simulating realistic and comprehensive models of the human gut microbiota.
Time Counts! Some Comments on System Latency in Head-Referenced Displays
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Adelstein, Bernard D.
2013-01-01
System response latency is a prominent characteristic of human-computer interaction. Laggy systems are, however, not simply annoying but substantially reduce user productivity. The impact of latency on head-referenced display systems, particularly head-mounted systems, is especially disturbing since not only can it interfere with dynamic registration in augmented reality displays but it can also, in some cases, indirectly contribute to motion sickness. We will summarize several experiments using standard psychophysical discrimination techniques that suggest what system latencies will be required to achieve perceptual stability for spatially referenced computer-generated imagery. In conclusion I will speculate about other system performance characteristics that I would hope to have for a dream augmented reality system.
A Perspective on the Role of Computational Models in Immunology.
Chakraborty, Arup K
2017-04-26
This is an exciting time for immunology because the future promises to be replete with exciting new discoveries that can be translated to improve health and treat disease in novel ways. Immunologists are attempting to answer increasingly complex questions concerning phenomena that range from the genetic, molecular, and cellular scales to that of organs, whole animals or humans, and populations of humans and pathogens. An important goal is to understand how the many different components involved interact with each other within and across these scales for immune responses to emerge, and how aberrant regulation of these processes causes disease. To aid this quest, large amounts of data can be collected using high-throughput instrumentation. The nonlinear, cooperative, and stochastic character of the interactions between components of the immune system as well as the overwhelming amounts of data can make it difficult to intuit patterns in the data or a mechanistic understanding of the phenomena being studied. Computational models are increasingly important in confronting and overcoming these challenges. I first describe an iterative paradigm of research that integrates laboratory experiments, clinical data, computational inference, and mechanistic computational models. I then illustrate this paradigm with a few examples from the recent literature that make vivid the power of bringing together diverse types of computational models with experimental and clinical studies to fruitfully interrogate the immune system.
Automatic creation of three-dimensional avatars
NASA Astrophysics Data System (ADS)
Villa-Uriol, Maria-Cruz; Sainz, Miguel; Kuester, Falko; Bagherzadeh, Nader
2003-01-01
Highly accurate avatars of humans promise a new level of realism in engineering and entertainment applications, including areas such as computer-animated movies, computer game development, interactive virtual environments, and tele-presence. In order to provide high-quality avatars, new techniques for their automatic acquisition and creation are required. A framework for the capture and construction of arbitrary avatars from image data is presented in this paper. Avatars are automatically reconstructed from multiple static images of a human subject by utilizing image information to reshape a synthetic three-dimensional articulated reference model. A pipeline is presented that combines a set of hardware-accelerated stages into one seamless system. Primary stages in this pipeline include pose estimation, skeleton fitting, body part segmentation, geometry construction and coloring, leading to avatars that can be animated and included into interactive environments. The presented system removes traditional constraints in the initial pose of the captured subject by using silhouette-based modification techniques in combination with a reference model. Results can be obtained in near-real time with very limited user intervention.
An innovative multimodal virtual platform for communication with devices in a natural way
NASA Astrophysics Data System (ADS)
Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.
2012-03-01
As technology grows, people are increasingly interested in communicating with machines and computers naturally. This makes machines more compact and portable by avoiding remotes, keyboards, etc., and helps users live in an environment freer from electromagnetic-wave-emitting devices. This thought has made recognition of natural modalities in human-computer interaction a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the full utilization of commands as well as data flow. In this paper a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice, and face, human gestures are combined with human voice so as to minimize the mean square error. This loosens the strict environment needed for accurate and robust interaction with a single mode. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is suited to descriptive tasks. Human-computer interaction basically requires two broad stages, recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a difficult task, as it integrates the real world with a virtual environment. The main idea of the paper is to develop an efficient model for fusing data coming from heterogeneous sensors, a camera and a microphone. Through this paper we have analyzed that efficiency increases when heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this work is to design a robust system for users who are physically impaired or have limited technical knowledge.
NASA Astrophysics Data System (ADS)
Roy, Jean; Breton, Richard; Paradis, Stephane
2001-08-01
Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation Analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of SA concepts and the design of data fusion techniques must take into account human factors in order to ensure a cognitive fit of the fusion system with the decision maker. Indeed, the tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human-machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.
DBSecSys: a database of Burkholderia mallei secretion systems.
Memišević, Vesna; Kumar, Kamal; Cheng, Li; Zavaljevski, Nela; DeShazer, David; Wallqvist, Anders; Reifman, Jaques
2014-07-16
Bacterial pathogenicity represents a major public health concern worldwide. Secretion systems are a key component of bacterial pathogenicity, as they provide the means for bacterial proteins to penetrate host-cell membranes and insert themselves directly into the host cells' cytosol. Burkholderia mallei is a Gram-negative bacterium that uses multiple secretion systems during its host infection life cycle. To date, the identities of secretion system proteins for B. mallei are not well known, and their pathogenic mechanisms of action and host factors are largely uncharacterized. We present the Database of Burkholderia mallei Secretion Systems (DBSecSys), a compilation of manually curated and computationally predicted bacterial secretion system proteins and their host factors. Currently, DBSecSys contains comprehensive experimentally and computationally derived information about B. mallei strain ATCC 23344. The database includes 143 B. mallei proteins associated with five secretion systems, their 1,635 human and murine interacting targets, and the corresponding 2,400 host-B. mallei interactions. The database also includes information about 10 pathogenic mechanisms of action for B. mallei secretion system proteins inferred from the available literature. Additionally, DBSecSys provides details about 42 virulence attenuation experiments for 27 B. mallei secretion system proteins. Users interact with DBSecSys through a Web interface that allows for data browsing, querying, visualizing, and downloading. DBSecSys provides a comprehensive, systematically organized resource of experimental and computational data associated with B. mallei secretion systems. It provides the unique ability to study secretion systems not only through characterization of their corresponding pathogen proteins, but also through characterization of their host-interacting partners. The database is available at https://applications.bhsai.org/dbsecsys.
Fusing human and machine skills for remote robotic operations
NASA Technical Reports Server (NTRS)
Schenker, Paul S.; Kim, Won S.; Venema, Steven C.; Bejczy, Antal K.
1991-01-01
The question of how computer assists can improve teleoperator trajectory tracking during both free and force-constrained motions is addressed. Computer graphics techniques which enable the human operator to both visualize and predict detailed 3D trajectories in real-time are reported. Man-machine interactive control procedures for better management of manipulator contact forces and positioning are also described. It is found that collectively, these novel advanced teleoperations techniques both enhance system performance and significantly reduce control problems long associated with teleoperations under time delay. Ongoing robotic simulations of the 1984 space shuttle Solar Maximum EVA Repair Mission are briefly described.
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. 
Furthermore, this GPU system provides extremely cost-effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000, while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
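The core of MDR can be illustrated with a toy two-locus version: for each SNP pair, genotype cells whose case/control ratio exceeds the overall ratio are pooled into a "high-risk" class, and the resulting binary rule is scored. This sketch omits the cross-validation and permutation testing of the real MDR algorithm, runs on CPU rather than GPU, and uses fabricated illustrative data:

```python
from collections import defaultdict
from itertools import combinations

def mdr_two_locus(genotypes, labels):
    """Simplified two-locus MDR: exhaustively search SNP pairs,
    pool genotype cells into high/low risk by case/control ratio,
    and score each pair's pooled rule by classification accuracy."""
    n_snps = len(genotypes[0])
    cases = sum(labels)
    controls = len(labels) - cases
    overall = cases / max(controls, 1)
    best = (None, -1.0)
    for i, j in combinations(range(n_snps), 2):
        counts = defaultdict(lambda: [0, 0])      # cell -> [cases, controls]
        for row, y in zip(genotypes, labels):
            counts[(row[i], row[j])][0 if y else 1] += 1
        high = {cell for cell, (ca, co) in counts.items()
                if ca / max(co, 1) > overall}     # high-risk cells
        correct = sum(1 for row, y in zip(genotypes, labels)
                      if ((row[i], row[j]) in high) == bool(y))
        acc = correct / len(labels)
        if acc > best[1]:
            best = ((i, j), acc)
    return best

# XOR-style epistasis between SNPs 0 and 1; SNP 2 is noise
geno = [(0,0,0),(0,1,1),(1,0,0),(1,1,1),(0,0,1),(0,1,0),(1,0,1),(1,1,0)]
y    = [0, 1, 1, 0, 0, 1, 1, 0]
print(mdr_two_locus(geno, y))
```

Neither SNP 0 nor SNP 1 predicts the label alone; only their combination does, which is exactly the non-linear interaction MDR is built to detect, and the pairwise exhaustive loop is what the GPU implementation parallelizes.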
Cognitive Architectures and Human-Computer Interaction. Introduction to Special Issue.
ERIC Educational Resources Information Center
Gray, Wayne D.; Young, Richard M.; Kirschenbaum, Susan S.
1997-01-01
In this introduction to a special issue on cognitive architectures and human-computer interaction (HCI), editors and contributors provide a brief overview of cognitive architectures. The following four architectures represented by articles in this issue are: Soar; LICAI (linked model of comprehension-based action planning and instruction taking);…
Factors Influencing Adoption of Ubiquitous Internet amongst Students
ERIC Educational Resources Information Center
Juned, Mohammad; Adil, Mohd
2015-01-01
Weiser's (1991) conceptualisation of a world in which humans' interaction with computer technology would no longer be limited to conventional input and output devices has now been translated into a reality, with humans' constant interaction with multiple interconnected computers and sensors embedded in rooms, furniture, clothes, tools, and other…
NASA Technical Reports Server (NTRS)
1993-01-01
Using chordic technology, a data entry operator can finger key combinations for text or graphics input. Because only one hand is needed, a disabled person may use it. Strain and fatigue are less than when using a conventional keyboard; input is faster, and the system can be learned in about an hour. Infogrip, Inc. developed chordic input technology with Stennis Space Center (SSC). (NASA is interested in potentially faster human/computer interaction on spacecraft as well as a low cost tactile/visual training system for the handicapped.) The company is now marketing the BAT as an improved system for both disabled and non-disabled computer operators.
Using Interactive Computer to Communicate Scientific Information.
ERIC Educational Resources Information Center
Selnow, Gary W.
1988-01-01
Asks whether the computer is another channel of communication, if its interactive qualities make it an information source, or if it is an undefined hybrid. Concludes that computers are neither the medium nor the source but will in the future provide the possibility of a sophisticated interaction between human intelligence and artificial…
Human-technology interaction for standoff IED detection
NASA Astrophysics Data System (ADS)
Zhang, Evan; Zou, Yiyang; Zachrich, Liping; Fulton, Jack
2011-03-01
IEDs kill our soldiers and innocent people every day. Lessons learned from Iraq and Afghanistan clearly indicate that IEDs cannot be detected or defeated by technology alone; human-technology interaction must be engaged. In most cases the eye is the best detector and the brain is the best computer; technologies are tools, and only when used properly by human beings can they achieve full functionality. In this paper, a three-sensor fusion system (UV Raman/fluorescence, CCD, and LWIR) for standoff IED detection and a handheld fusion system for close-range IED detection are developed and demonstrated. Soldiers must be trained to use their eyes or the CCD/LWIR cameras for wide-area search while on the move, finding a small suspected area first before applying the spectrometer: because the laser spot is so small, scanning a one-mile-long, 2-meter-wide road would take 185 days, even though our fusion system can detect an IED at 30 m with a 1 s interrogation time. Even when a small suspected area (e.g., 0.5 m x 0.5 m) is found, human eyes still cannot detect the IED; soldiers must interact with the technology - the laser-based spectrometer - to scan the area, and are then able to detect and identify the IED in 10 minutes rather than 185 days. Therefore, the human-technology interaction approach is the best solution for IED detection.
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea
2000-01-01
The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers with simple instruction from the user interacting with the client.
NASA Technical Reports Server (NTRS)
Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott
2013-01-01
The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says game in which they attempted to duplicate random sequences of targeted finger strikes and (2) a Text Entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants' mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99% in both conditions. In Text Entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as a reduction in error rate. Secondary analyses of responses to NASA Task Load Index (TLX) subjective workload assessments revealed a benefit of tactile feedback in GECO glove use for data entry. This first-ever investigation of the employment of a pressurized EVA glove for human-computer interface opens up a wide range of future applications, including text "chat" communications, manipulation of procedures/checklists, cataloguing/annotating images, scientific note taking, human-robot interaction, and control of suit and/or other EVA systems.
The human factors of workstation telepresence
NASA Technical Reports Server (NTRS)
Smith, Thomas J.; Smith, Karl U.
1990-01-01
The term workstation telepresence has been introduced to describe human-telerobot compliance, which enables the human operator to effectively project his/her body image and behavioral skills to control of the telerobot itself. Major human-factors considerations for establishing high-fidelity workstation telepresence during human-telerobot operation are discussed. Telerobot workstation telepresence is defined by the proficiency and skill with which the operator is able to control sensory feedback from direct interaction with the workstation itself, and from workstation-mediated interaction with the telerobot. Numerous conditions influencing such control have been identified. This raises the question as to what specific factors most critically influence the realization of high-fidelity workstation telepresence. The thesis advanced here is that perturbations in sensory feedback represent a major source of variability in human performance during interactive telerobot operation. Perturbed sensory feedback research over the past three decades has established that spatial transformations or temporal delays in sensory feedback engender substantial decrements in interactive task performance, which training does not completely overcome. A recently developed social cybernetic model of human-computer interaction, based on computer-mediated tracking and control of sensory feedback, can be used to guide this approach. How the social cybernetic model can be employed for evaluating the various modes, patterns, and integrations of interpersonal, team, and human-computer interactions which play a central role in workstation telepresence is discussed.
Mostafa, Marwa Mostafa; Nassef, Mohammad; Badr, Amr
2016-10-01
Salmonella and Escherichia coli are different types of bacteria that cause food poisoning in humans. In the elderly, infants and people with chronic conditions, it is very dangerous if Salmonella or E. coli gets into the bloodstream, and such infections must then be treated by phage therapy. Treating Salmonella and E. coli by phage therapy affects the gut flora. This research paper presents a system for detecting the effects of virulent E. coli and Salmonella bacteriophages on the human gut. A method based on the Domain-Domain Interactions (DDIs) model is implemented in the proposed system to determine the interactions between the proteins of human gut bacteria and the proteins of bacteriophages that infect virulent E. coli and Salmonella. The system helps gastroenterologists to realize the effect of injecting bacteriophages that infect virulent E. coli and Salmonella into the human gut. By testing the system on Enterobacteria phage 933W, Enterobacteria phage VT2-Sa and Enterobacteria phage P22, it found four interactions between the proteins of the bacteriophages that infect E. coli O157:H7, E. coli O104:H4 and Salmonella typhimurium and the proteins of human gut bacterium strains. Several effects were detected, such as: antibacterial activity against a number of bacterial species in the human gut; regulation of cellular differentiation and organogenesis during gut, lung, and heart development; ammonia assimilation in bacteria, yeasts, and plants; energizing of the defense system and its function in the detoxification of lipopolysaccharide; and prevention of bacterial translocation in the human gut. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
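The DDI-based prediction the abstract describes can be sketched as a simple lookup: a phage protein and a host protein are predicted to interact if any pair of their constituent domains appears in a catalogue of known domain-domain interactions. All protein and domain identifiers below are invented for illustration; this is a minimal sketch, not the paper's implementation.

```python
def predict_interactions(phage_proteins, host_proteins, known_ddis):
    """Predict (phage protein, host protein) pairs that interact because
    they carry at least one known interacting domain pair (DDI model).

    phage_proteins / host_proteins: dicts mapping protein name -> set of
    domain identifiers; known_ddis: iterable of (domain, domain) pairs.
    """
    ddis = {frozenset(pair) for pair in known_ddis}  # order-insensitive
    hits = []
    for p_name, p_domains in phage_proteins.items():
        for h_name, h_domains in host_proteins.items():
            if any(frozenset((d1, d2)) in ddis
                   for d1 in p_domains for d2 in h_domains):
                hits.append((p_name, h_name))
    return hits
```

On toy data, a phage protein carrying a domain with a catalogued partner domain in a gut-bacterium protein is flagged, and proteins without such a pairing are not.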
A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model is developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
Safety Metrics for Human-Computer Controlled Systems
NASA Technical Reports Server (NTRS)
Leveson, Nancy G; Hatanaka, Iwao
2000-01-01
The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model is developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.
Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.
Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C
2016-03-01
Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2017-01-01
A color algebra refers to a system for computing sums and products of colors, analogous to additive and subtractive color mixtures. The difficulty addressed here is the fact that, because of metamerism, we cannot know with certainty the spectrum that produced a particular color solely on the basis of sensory data. Knowledge of the spectrum is not required to compute additive mixture of colors, but is critical for subtractive (multiplicative) mixture. Therefore, we cannot predict with certainty the multiplicative interactions between colors based solely on sensory data. There are two potential applications of a color algebra: first, to aid modeling phenomena of human visual perception, such as color constancy and transparency; and, second, to provide better models of the interactions of lights and surfaces for computer graphics rendering.
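The asymmetry the abstract describes — additive mixture is computable from sensor responses alone, while multiplicative (subtractive) mixture is not, because of metamerism — can be demonstrated numerically. The three "sensor" curves and the spectra below are random toy data, not real colorimetric functions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 3-channel sensor sensitivities over 10 wavelength bins.
sensors = rng.random((3, 10))

def responses(spectrum):
    """Sensor (tristimulus-like) responses: linear in the spectrum."""
    return sensors @ spectrum

s1 = rng.random(10)
s2 = rng.random(10)

# Additive mixture: responses are linear, so the mixed color follows
# from the component colors alone, without knowing the spectra.
assert np.allclose(responses(s1 + s2), responses(s1) + responses(s2))

# Metamerism: add a spectrum from the sensors' null space. The color
# (sensor response) is unchanged, yet multiplication by a filter
# spectrum distinguishes the two physically different lights.
null_vec = np.linalg.svd(sensors)[2][-1]   # sensors @ null_vec ~ 0
s1_metamer = s1 + 0.1 * null_vec
filt = rng.random(10)
assert np.allclose(responses(s1), responses(s1_metamer))
assert not np.allclose(responses(filt * s1), responses(filt * s1_metamer))
```

The last assertion is the point: two metameric lights look identical to the sensors, but their subtractive interactions with the same filter differ, so multiplicative color mixture cannot be predicted from sensory data alone.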
Anderson, Thomas G.
2004-12-21
The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
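A toy one-dimensional force profile of the kind the patent describes might look as follows; the gain, cap, and functional form are invented for illustration and are not the patented method.

```python
def boundary_force(distance, gain=2.0, max_force=5.0):
    """Resistive force felt at the user's locus of interaction.

    The force grows as the locus approaches the boundary (distance -> 0)
    and drops abruptly to zero once the boundary is traversed
    (distance < 0), making the transition between control domains
    perceptible to the user.
    """
    if distance < 0:            # boundary traversed: force released
        return 0.0
    return min(max_force, gain / (distance + 0.1))
```

For example, the force at 0.2 units from the boundary exceeds the force at 1.0 units, and crossing the boundary releases it entirely.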
Human agency beliefs influence behaviour during virtual social interactions.
Caruana, Nathan; Spirou, Dean; Brock, Jon
2017-01-01
In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an "intentional stance" by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants' behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative "joint attention" game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other's eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm ("Computer" condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room ("Human" condition). Those in the "Human" condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the "Computer" condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application's goals.
Privacy preserving interactive record linkage (PPIRL).
Kum, Hye-Chung; Krishnamurthy, Ashok; Machanavajjhala, Ashwin; Reiter, Michael K; Ahalt, Stanley
2014-01-01
Record linkage to integrate uncoordinated databases is critical in biomedical research using Big Data. Balancing privacy protection against the need for high quality record linkage requires a human-machine hybrid system to safely manage uncertainty in the ever changing streams of chaotic Big Data. In the computer science literature, private record linkage is the most published area. It investigates how to apply a known linkage function safely when linking two tables. However, in practice, the linkage function is rarely known. Thus, there are many data linkage centers whose main role is to be the trusted third party to determine the linkage function manually and link data for research via a master population list for a designated region. Recently, a more flexible computerized third-party linkage platform, Secure Decoupled Linkage (SDLink), has been proposed based on: (1) decoupling data via encryption, (2) obfuscation via chaffing (adding fake data) and universe manipulation; and (3) minimum information disclosure via recoding. We synthesize this literature to formalize a new framework for privacy preserving interactive record linkage (PPIRL) with tractable privacy and utility properties and then analyze the literature using this framework. Human-based third-party linkage centers for privacy preserving record linkage are the accepted norm internationally. We find that a computer-based third-party platform that can precisely control the information disclosed at the micro level and allow frequent human interaction during the linkage process, is an effective human-machine hybrid system that significantly improves on the linkage center model both in terms of privacy and utility.
Stieglitz, T
2007-01-01
Today, applications of neural prostheses that successfully help patients to increase their activities of daily living and participate in social life again are quite simple implants that yield a definite tissue response and are well recognized as foreign bodies. The latest developments in genetic engineering, nanotechnologies and materials sciences have paved the way to new scenarios towards highly complex systems to interface the human nervous system. Combinations of neural cells with microimplants promise stable biohybrid interfaces. Nanotechnology opens the door to macromolecular landscapes on implants that mimic the biologic topology and surface interaction of biologic cells. Computer sciences dream of technical cognitive systems that act and react, via knowledge-based conclusion mechanisms, to a changing or adaptive environment. Different sciences are starting to interact and discuss the synergies that arise when methods and paradigms from biology, computer sciences and engineering, neurosciences and psychology are combined. They envision the era of "converging technologies" to completely change the understanding of science and postulate a new vision of humans. In this chapter, these research lines are discussed with some examples, as well as the societal implications and ethical questions that arise from these new opportunities.
Harnessing the Power of Interactivity for Instruction.
ERIC Educational Resources Information Center
Borsook, Terry K.
Arguing that what sets the computer apart from all other teaching devices is its potential for interactivity, this paper examines the concept of interactivity and explores ways in which its power can be harnessed and put to work. A discussion of interactivity in human-to-human communication sets a context within which to view human/computer…
Learning through social interaction in game technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waern, Annika; Raybourn, Elaine Marie
2005-05-01
The present ITSE journal special issue on 'Learning About Social Interaction through Gaming' is the result of an invitation to the attendees of a one-day workshop on 'Social Learning Through Gaming' co-organized by the guest editors and held at the Human Factors in Computing Systems (CHI) conference on April 26, 2004 in Vienna, Austria. CHI is one of the premier conferences on human-computer interaction. CHI 2004 attracted hundreds of delegates from all over the world. The CHI workshop program results from a competitive selection process. The Social Learning through Gaming workshop was filled to capacity and attended by approximately 25 participants from Europe and North America who submitted position papers that were refereed and selected for participation based on the relevancy and innovativeness of the research. The participants came together to share research on play, learning, games, interactive technologies, and what playing and designing games can teach us about social behaviors. The present special issue focuses on learning about social aspects through gaming: learning to socialize through games and learning games through social behavior.
A Project-Based Learning Setting to Human-Computer Interaction for Teenagers
ERIC Educational Resources Information Center
Geyer, Cornelia; Geisler, Stefan
2012-01-01
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give…
NASA Astrophysics Data System (ADS)
Heinert, G.; Mondorf, W.
1982-11-01
High-speed image processing was used to analyse morphologic and metabolic characteristics of clinically relevant kidney tissue alterations. Qualitative computer-assisted histophotometry was performed to measure alterations in the levels of the enzymes alkaline phosphatase (AP), alanine aminopeptidase (AAP), γ-glutamyltranspeptidase (GGTP) and β-glucuronidase (β-Gl), with AAP and GGTP determined immunologically in prepared renal and cancer tissue sections. A "Micro-Videomat 2" image analysis system with a "Tessovar" macroscope, a computer-assisted "Axiomat" photomicroscope and an "Interactive Image Analysis System" (IBAS) were employed for analysing changes in enzyme activities determined by changes in absorbance or transmission. Diseased kidney as well as renal neoplastic tissues could be distinguished by significantly (Wilcoxon test, p < 0.05) decreased enzyme concentrations compared with those found in normal human kidney tissues. These image analysis techniques might be of potential use in the diagnostic and prognostic evaluation of renal cancer and diseased kidney tissues.
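The group comparison reported above (Wilcoxon test, p < 0.05) can be reproduced on toy data with a standard rank-based test. The absorbance-derived activity values below are invented, and the two-sided Wilcoxon rank-sum (Mann-Whitney U) test is assumed as the intended variant for comparing two independent tissue groups.

```python
from scipy.stats import mannwhitneyu

# Illustrative enzyme activities (arbitrary absorbance units) for two
# independent tissue groups; numbers are invented for demonstration.
normal   = [0.82, 0.79, 0.85, 0.88, 0.81, 0.84, 0.80, 0.86]
diseased = [0.55, 0.61, 0.58, 0.52, 0.60, 0.57, 0.63, 0.54]

# Two-sided Wilcoxon rank-sum (Mann-Whitney U) test; the groups are
# declared distinguishable at the paper's p < 0.05 criterion.
stat, p = mannwhitneyu(normal, diseased, alternative="two-sided")
significant = p < 0.05
```

With these clearly separated samples the test yields a very small p-value, mirroring the paper's conclusion that diseased and normal tissues differ significantly in enzyme concentration.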
Coordination dynamics in a socially situated nervous system
Coey, Charles A.; Varlet, Manuel; Richardson, Michael J.
2012-01-01
Traditional theories of cognitive science have typically accounted for the organization of human behavior by detailing requisite computational/representational functions and identifying neurological mechanisms that might perform these functions. Put simply, such approaches hold that neural activity causes behavior. This same general framework has been extended to accounts of human social behavior via concepts such as “common-coding” and “co-representation” and much recent neurological research has been devoted to brain structures that might execute these social-cognitive functions. Although these neural processes are unquestionably involved in the organization and control of human social interactions, there is good reason to question whether they should be accorded explanatory primacy. Alternatively, we propose that a full appreciation of the role of neural processes in social interactions requires appropriately situating them in their context of embodied-embedded constraints. To this end, we introduce concepts from dynamical systems theory and review research demonstrating that the organization of human behavior, including social behavior, can be accounted for in terms of self-organizing processes and lawful dynamics of animal-environment systems. Ultimately, we hope that these alternative concepts can complement the recent advances in cognitive neuroscience and thereby provide opportunities to develop a complete and coherent account of human social interaction. PMID:22701413
NASA Technical Reports Server (NTRS)
Kriegler, F. J.
1974-01-01
The MIDAS System is described as a third-generation fast multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turnaround time and significant gains in throughput. The hardware and software are described. The system contains a mini-computer to control the various high-speed processing elements in the data path, and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 200,000 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation.
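The classifier's decision rule can be sketched as follows: the per-class "coefficients" (mean vector, inverse covariance, log-determinant) correspond to those the abstract says are computed from training signatures, and each pixel is assigned to the class maximizing the multivariate-Gaussian log-likelihood. This is an illustrative CPU sketch with invented class names, not the MIDAS hardware implementation.

```python
import numpy as np

def train(samples_by_class):
    """Compute per-class classifier coefficients from training
    signatures: mean, inverse covariance, and log-determinant."""
    coeffs = {}
    for label, X in samples_by_class.items():
        X = np.asarray(X, dtype=float)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        coeffs[label] = (mu, np.linalg.inv(cov),
                         np.log(np.linalg.det(cov)))
    return coeffs

def classify(pixel, coeffs):
    """Maximum-likelihood decision: pick the class whose Gaussian
    log-likelihood of the pixel's spectral vector is largest."""
    def log_lik(label):
        mu, inv_cov, log_det = coeffs[label]
        d = pixel - mu
        return -0.5 * (log_det + d @ inv_cov @ d)
    return max(coeffs, key=log_lik)
```

For instance, with two well-separated two-band training signatures, a pixel near each class mean is assigned to that class.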
The need and potential for building an integrated knowledge-base of the Earth-Human system
NASA Astrophysics Data System (ADS)
Jacobs, Clifford
2011-03-01
The pursuit of scientific understanding is increasingly based on interdisciplinary research. To understand more deeply the planet and its interactions requires a progressively more holistic approach, exploring knowledge coming from all scientific and engineering disciplines including but not limited to, biology, chemistry, computer sciences, geosciences, material sciences, mathematics, physics, cyberinfrastucture, and social sciences. Nowhere is such an approach more critical than in the study of global climate change in which one of the major challenges is the development of next-generation Earth System Models that include coupled and interactive representations of ecosystems, agricultural working lands and forests, urban environments, biogeochemistry, atmospheric chemistry, ocean and atmospheric currents, the water cycle, land ice, and human activities.
Application Reuse Library for Software, Requirements, and Guidelines
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Thronesbery, Carroll
1994-01-01
Better designs are needed for expert systems and other operations automation software, for more reliable, usable and effective human support. A prototype computer-aided Application Reuse Library shows feasibility of supporting concurrent development and improvement of advanced software by users, analysts, software developers, and human-computer interaction experts. Such a library expedites development of quality software, by providing working, documented examples, which support understanding, modification and reuse of requirements as well as code. It explicitly documents and implicitly embodies design guidelines, standards and conventions. The Application Reuse Library provides application modules with Demo-and-Tester elements. Developers and users can evaluate applicability of a library module and test modifications, by running it interactively. Sub-modules provide application code and displays and controls. The library supports software modification and reuse, by providing alternative versions of application and display functionality. Information about human support and display requirements is provided, so that modifications will conform to guidelines. The library supports entry of new application modules from developers throughout an organization. Example library modules include a timer, some buttons and special fonts, and a real-time data interface program. The library prototype is implemented in the object-oriented G2 environment for developing real-time expert systems.
Intelligent Adaptive Interface: A Design Tool for Enhancing Human-Machine System Performances
2009-10-01
and customizable. Thus, an intelligent interface should tailor its parameters to certain prescribed specifications or convert itself and adjust to ... Computer Interaction 3(2): 87-122. [51] Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W., & Wielinga, W.
A histological ontology of the human cardiovascular system.
Mazo, Claudia; Salazar, Liliana; Corcho, Oscar; Trujillo, Maria; Alegre, Enrique
2017-10-02
In this paper, we describe a histological ontology of the human cardiovascular system developed in collaboration between histology experts and computer scientists. The histological ontology is developed following an existing methodology using Conceptual Models (CMs) and validated using OOPS!, expert evaluation with CMs, and how accurately the ontology can answer the Competency Questions (CQs). It is publicly available at http://bioportal.bioontology.org/ontologies/HO and https://w3id.org/def/System. The histological ontology is developed to support complex tasks, such as supporting teaching activities, medical practice, and biomedical research, or enabling natural-language interactions.
Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.
Lee, Seungcheol Austin; Liang, Yuhua Jake
2015-04-01
Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understanding the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with a computer agent's request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.
Numerical Models of Human Circulatory System under Altered Gravity: Brain Circulation
NASA Technical Reports Server (NTRS)
Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan; David, Tim
2003-01-01
A computational fluid dynamics (CFD) approach is presented to model the blood flow through the human circulatory system under altered gravity conditions. Models required for CFD simulation relevant to major hemodynamic issues are introduced, such as non-Newtonian flow models governed by red blood cells, a model for arterial wall motion due to fluid-wall interactions, a vascular bed model for outflow boundary conditions, and a model for the auto-regulation mechanism. The three-dimensional unsteady incompressible Navier-Stokes equations coupled with these models are solved iteratively using the pseudocompressibility method and dual time stepping. Moving wall boundary conditions from the first-order fluid-wall interaction model are used to study the influence of arterial wall distensibility on flow patterns and wall shear stresses during the heart pulse. A vascular bed model utilizing the analogy with electric circuits is coupled with an auto-regulation algorithm for multiple outflow boundaries. For the treatment of complex geometry, a chimera overset grid technique is adopted to obtain connectivity between arterial branches. For code validation, computed results are compared with experimental data for steady and unsteady non-Newtonian flows. Good agreement is obtained for both cases. In sine-type gravity benchmark problems, gravity source terms are added to the Navier-Stokes equations to study the effect of gravitational variation on the human circulatory system. This computational approach is then applied to localized blood flows through a realistic carotid bifurcation and two Circle of Willis models, one using an idealized geometry and the other using an anatomical data set. A three-dimensional anatomical Circle of Willis configuration is reconstructed from human-specific magnetic resonance images using an image segmentation method.
The blood flow through these Circle of Willis models is simulated to provide means for studying gravitational effects on the brain circulation under auto-regulation.
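The vascular bed outflow model mentioned in the abstract uses an electric-circuit analogy; a minimal sketch of the classic three-element Windkessel ("RCR") form of such a model follows. The function name and all parameter values are illustrative assumptions, not taken from the paper: a proximal resistance Rp in series with a parallel compliance C and distal resistance Rd, integrated with forward Euler.

```python
import math

# Hypothetical three-element Windkessel sketch: pressure plays the role of
# voltage, flow the role of current. Parameter values are illustrative only.
def windkessel_step(p_c, q_in, dt, Rp=0.05, C=1.2, Rd=1.0):
    """Advance the compliance-chamber pressure p_c one step of size dt
    given inflow q_in; return (new p_c, outlet boundary pressure)."""
    # The compliance chamber is charged by inflow and drained through Rd
    # (venous pressure taken as zero).
    dp_c = (q_in - p_c / Rd) / C
    p_c_new = p_c + dt * dp_c
    # Outlet pressure seen by the 3D flow domain.
    p_out = p_c_new + Rp * q_in
    return p_c_new, p_out

# Drive the model with a crude half-sine "systolic" inflow for one second.
p_c, dt = 80.0, 0.001
for n in range(1000):
    t = n * dt
    q = max(0.0, 400.0 * math.sin(2 * math.pi * t))
    p_c, p_out = windkessel_step(p_c, q, dt)
```

In a coupled simulation, `p_out` would be imposed as the pressure boundary condition at an arterial outlet at each time step, with `q_in` read back from the 3D solution.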
Evidence Report: Risk of Inadequate Human-Computer Interaction
NASA Technical Reports Server (NTRS)
Holden, Kritina; Ezer, Neta; Vos, Gordon
2013-01-01
Human-computer interaction (HCI) encompasses all the methods by which humans and computer-based systems communicate, share information, and accomplish tasks. When HCI is poorly designed, crews have difficulty entering, navigating, accessing, and understanding information. HCI has rarely been studied in an operational spaceflight context, and detailed performance data that would support evaluation of HCI have not been collected; thus, we draw much of our evidence from post-spaceflight crew comments and from other safety-critical domains, such as ground-based power plants and aviation. Additionally, there is a concern that any potential or real issues to date may have been masked by the fact that crews have near constant access to ground controllers, who monitor for errors, correct mistakes, and provide additional information needed to complete tasks. We do not know what types of HCI issues might arise without this "safety net". Exploration missions will test this concern, as crews may be operating autonomously due to communication delays and blackouts. Crew survival will be heavily dependent on available electronic information for just-in-time training, procedure execution, and vehicle or system maintenance; hence, the criticality of the Risk of Inadequate HCI. Future work must focus on identifying the most important contributing risk factors, evaluating their contribution to the overall risk, and developing appropriate mitigations. The Risk of Inadequate HCI includes eight core contributing factors based on the Human Factors Analysis and Classification System (HFACS): (1) Requirements, policies, and design processes, (2) Information resources and support, (3) Allocation of attention, (4) Cognitive overload, (5) Environmentally induced perceptual changes, (6) Misperception and misinterpretation of displayed information, (7) Spatial disorientation, and (8) Displays and controls.
ERIC Educational Resources Information Center
Carter, Elizabeth J.; Williams, Diane L.; Hodgins, Jessica K.; Lehman, Jill F.
2014-01-01
Few direct comparisons have been made between the responsiveness of children with autism to computer-generated or animated characters and their responsiveness to humans. Twelve 4- to 8-year-old children with autism interacted with a human therapist; a human-controlled, interactive avatar in a theme park; a human actor speaking like the avatar; and…
Graphic-based musculoskeletal model for biomechanical analyses and animation.
Chao, Edmund Y S
2003-04-01
The ability to combine physiology and engineering analyses with computer science has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator of human musculoskeletal physiology. This simulation technology unites expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models, including prosthetic implants and fracture fixation devices, and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties, and a library of skeletal joint system functional activities and loading conditions is also available; it can easily be modified, updated, and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will have an impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.
Human-computer interaction in distributed supervisory control tasks
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1989-01-01
An overview of activities concerned with the development and applications of the Operator Function Model (OFM) is presented. The OFM is a mathematical tool to represent operator interaction with predominantly automated space ground control systems. The design and assessment of an intelligent operator aid (OFMspert and Ally) is particularly discussed. The application of OFM to represent the task knowledge in the design of intelligent tutoring systems, designated OFMTutor and ITSSO (Intelligent Tutoring System for Satellite Operators), is also described. Viewgraphs from symposia presentations are compiled along with papers addressing the intent inferencing capabilities of OFMspert, the OFMTutor system, and an overview of intelligent tutoring systems and the implications for complex dynamic systems.
Almost human: Anthropomorphism increases trust resilience in cognitive agents.
de Visser, Ewart J; Monfort, Samuel S; McKendrick, Ryan; Smith, Melissa A B; McKnight, Patrick E; Krueger, Frank; Parasuraman, Raja
2016-09-01
We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human–automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism—the degree to which an agent exhibits human characteristics—is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human–agent trust as well as novel automation design. PsycINFO Database Record (c) 2016 APA, all rights reserved
Information visualization: Beyond traditional engineering
NASA Technical Reports Server (NTRS)
Thomas, James J.
1995-01-01
This presentation addresses a different aspect of the human-computer interface; specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond traditional computer graphics and CAD and enables new approaches for engineering. IV specifically must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.
Embodied cognition for autonomous interactive robots.
Hoffman, Guy
2012-10-01
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.
Koperwhats, Martha A; Chang, Wei-Chih; Xiao, Jianguo
2002-01-01
Digital imaging technology promises efficient, economical, and fast service for patient care, but the challenges are great in the transition from film to a filmless (digital) environment. This change has a significant impact on the film library's personnel (film librarians), who play a leading role in the storage, classification, and retrieval of images. The objectives of this project were to study film library errors and the usability of a physical computerized system that could not be changed, while developing an intervention to reduce errors and testing the usability of the intervention. Cognitive and human factors analyses were used to evaluate human-computer interaction. A workflow analysis was performed to understand the film and digital imaging processes. User and task analyses were applied to account for all behaviors involved in interaction with the system. A heuristic evaluation was used to probe the usability issues in the picture archiving and communication systems (PACS) modules. Simplified paper-based instructions were designed to familiarize the film librarians with the digital system. A usability survey evaluated the effectiveness of the instruction. The user and task analyses indicated that different users faced challenges based on their computer literacy, education, roles, and frequency of use of diagnostic imaging. The workflow analysis showed that the approaches to using the digital library differ among the various departments. The heuristic evaluation of the PACS modules showed the human-computer interface to have usability issues that prevented easy operation. Simplified instructions were designed for operation of the modules. Usability surveys conducted before and after revision of the instructions showed that performance improved. Cognitive and human factors analyses can help film librarians and other users adapt to the filmless system.
Use of cognitive science tools will aid in successful transition of the film library from a film environment to a digital environment.
Lechner, William J; MacGlashan, James; Wray, Tyler B; Littman, Michael L
2017-01-01
Background: Computer-delivered interventions have been shown to be effective in reducing alcohol consumption in heavy drinking college students. However, these computer-delivered interventions rely on mouse, keyboard, or touchscreen responses for interactions between the users and the computer-delivered intervention. The principles of motivational interviewing suggest that in-person interventions may be effective, in part, because they encourage individuals to think through and speak aloud their motivations for changing a health behavior, which current computer-delivered interventions do not allow. Objective: The objective of this study was to take the initial steps toward development of a voice-based computer-delivered intervention that can ask open-ended questions and respond appropriately to users' verbal responses, more closely mirroring a human-delivered motivational intervention. Methods: We developed (1) a voice-based computer-delivered intervention that was run by a human controller and that allowed participants to speak their responses to scripted prompts delivered by speech generation software and (2) a text-based computer-delivered intervention that relied on the mouse, keyboard, and computer screen for all interactions. We randomized 60 heavy drinking college students to interact with the voice-based computer-delivered intervention and 30 to interact with the text-based computer-delivered intervention and compared their ratings of the systems as well as their motivation to change drinking and their drinking behavior at 1-month follow-up. Results: Participants reported that the voice-based computer-delivered intervention engaged positively with them in the session and delivered content in a manner consistent with motivational interviewing principles.
At 1-month follow-up, participants in the voice-based computer-delivered intervention condition reported significant decreases in quantity, frequency, and problems associated with drinking, and increased perceived importance of changing drinking behaviors. In comparison to the text-based computer-delivered intervention condition, those assigned to the voice-based computer-delivered intervention reported significantly fewer alcohol-related problems at the 1-month follow-up (incident rate ratio 0.60, 95% CI 0.44-0.83, P=.002). The conditions did not differ significantly on perceived importance of changing drinking or on measures of drinking quantity and frequency of heavy drinking. Conclusions: Results indicate that it is feasible to construct a series of open-ended questions and a bank of responses and follow-up prompts that can be used in a future fully automated voice-based computer-delivered intervention that may mirror more closely human-delivered motivational interventions to reduce drinking. Such efforts will require using advanced speech recognition capabilities and machine-learning approaches to train a program to mirror the decisions made by human controllers in the voice-based computer-delivered intervention used in this study. In addition, future studies should examine enhancements that can increase the perceived warmth and empathy of voice-based computer-delivered interventions, possibly through greater personalization, improvements in the speech generation software, and embodying the computer-delivered intervention in a physical form. PMID:28659259
Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective
Mattout, Jérémie
2012-01-01
A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291
Designing Guiding Systems for Brain-Computer Interfaces
Kosmyna, Nataliya; Lécuyer, Anatole
2017-01-01
The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore instructions and feedback for BCIs in order to provide a systematic taxonomy for describing BCI guiding systems. The purpose of our work is to give researchers and designers in Human–Computer Interaction (HCI) the clues necessary to make the fusion between BCIs and HCI more fruitful, and also to better understand the possibilities BCIs can provide to them. PMID:28824400
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
1991-01-01
Natural environments have a content, i.e., the objects in them; a geometry, i.e., a pattern of rules for positioning and displacing the objects; and a dynamics, i.e., a system of rules describing the effects of forces acting on the objects. Human interaction with most common natural environments has been optimized by centuries of evolution. Virtual environments created through the human-computer interface similarly have a content, geometry, and dynamics, but the arbitrary character of the computer simulation creating them does not insure that human interaction with these virtual environments will be natural. The interaction, indeed, could be supernatural but it also could be impossible. An important determinant of the comprehensibility of a virtual environment is the correspondence between the environmental frames of reference and those associated with the control of environmental objects. The effects of rotation and displacement of control frames of reference with respect to corresponding environmental references differ depending upon whether perceptual judgement or manual tracking performance is measured. The perceptual effects of frame of reference displacement may be analyzed in terms of distortions in the process of virtualizing the synthetic environment space. The effects of frame of reference displacement and rotation have been studied by asking subjects to estimate exocentric direction in a virtual space.
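The misalignment between control and environmental frames of reference that the abstract discusses can be illustrated with a simple 2D rotation; this sketch (function name and the 90-degree scenario are my own, purely illustrative) shows how a control displacement commanded in the operator's frame maps into the environment frame.

```python
import math

# Illustrative sketch: rotate a 2D control displacement into the
# environment frame when the two frames are misaligned by theta degrees.
def to_environment_frame(dx, dy, theta_deg):
    """Apply the standard 2D rotation matrix to a control-frame vector."""
    t = math.radians(theta_deg)
    return (dx * math.cos(t) - dy * math.sin(t),
            dx * math.sin(t) + dy * math.cos(t))

# With the control frame rotated 90 degrees relative to the environment,
# pushing "right" on the controller moves the object "up" on the display,
# the kind of discrepancy that degrades tracking performance.
ex, ey = to_environment_frame(1.0, 0.0, 90.0)
```

When the rotation angle is zero, the mapping is the identity and control feels "natural"; as the angle grows, the operator must mentally compose this rotation on every input.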
ERIC Educational Resources Information Center
Isaias, Pedro; Issa, Tomayess; Pena, Nuno
2014-01-01
When developing and working with various types of devices from a supercomputer to an iPod Mini, it is essential to consider the issues of Human Computer Interaction (HCI) and Usability. Developers and designers must incorporate HCI, Usability and user satisfaction in their design plans to ensure that systems are easy to learn, effective,…
NASA Technical Reports Server (NTRS)
Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.
1993-01-01
Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
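The CHIMES paper's RGB-to-qualitative-color conversion is not detailed in the abstract, but the general idea of bucketing quantitative primaries into qualitative names can be sketched as below. This is not the CHIMES method: the HSV route, the thresholds, and the color vocabulary are all assumptions for illustration.

```python
import colorsys

# Illustrative (non-CHIMES) mapping from quantitative RGB primaries to a
# coarse qualitative color name, via HSV: value separates black/white/gray,
# saturation separates achromatic from chromatic, hue picks the name.
def rgb_to_name(r, g, b):
    """r, g, b in [0, 1]; returns a qualitative color label."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if v < 0.15:
        return "black"
    if s < 0.15:                      # achromatic: split on brightness
        return "white" if v > 0.85 else "gray"
    hue = h * 360.0                   # chromatic: bucket the hue wheel
    if hue < 30 or hue >= 330:
        return "red"
    if hue < 90:
        return "yellow"
    if hue < 150:
        return "green"
    if hue < 270:
        return "blue"
    return "magenta"
```

A guideline checker built on such a mapping could then flag, say, a red/green foreground-background pairing without ever comparing raw RGB triples.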
Human computer confluence applied in healthcare and rehabilitation.
Viaud-Delmon, Isabelle; Gaggioli, Andrea; Ferscha, Alois; Dunne, Stephen
2012-01-01
Human computer confluence (HCC) is an ambitious research program studying how the emerging symbiotic relation between humans and computing devices can enable radically new forms of sensing, perception, interaction, and understanding. It is an interdisciplinary field, bringing together researchers from fields as varied as pervasive computing, bio-signal processing, neuroscience, electronics, robotics, and virtual and augmented reality, and it offers great potential for applications in medicine and rehabilitation.
Studying the neurobiology of human social interaction: Making the case for ecological validity.
Hogenelst, Koen; Schoevers, Robert A; aan het Rot, Marije
2015-01-01
With this commentary we make the case for an increased focus on the ecological validity of the measures used to assess aspects of human social functioning. Impairments in social functioning are seen in many types of psychopathology, negatively affecting the lives of psychiatric patients and those around them. Yet the neurobiology underlying abnormal social interaction remains unclear. As an example of human social neuroscience research with relevance to biological psychiatry and clinical psychopharmacology, this commentary discusses published experimental studies involving manipulation of the human brain serotonin system that included assessments of social behavior. To date, these studies have mostly been laboratory-based and included computer tasks, observations by others, or single-administration self-report measures. Most laboratory measures used so far inform about the role of serotonin in aspects of social interaction, but the relevance for real-life interaction is often unclear. Few studies have used naturalistic assessments in real life. We suggest several laboratory methods with high ecological validity as well as ecological momentary assessment, which involves intensive repeated measures in naturalistic settings. In sum, this commentary intends to stimulate experimental research on the neurobiology of human social interaction as it occurs in real life.
Blend Shape Interpolation and FACS for Realistic Avatar
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans, and advanced 3D tools has imparted further impetus to the rapid advancement of complex virtual human facial models. With face-to-face communication being the most natural form of human interaction, facial animation systems have become increasingly attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors is still a challenging issue. A facial expression carries the signature of happiness, sadness, anger, or cheerfulness; the mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expression. Facial expressions, being a complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. BSI is used to generate the natural face, while FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, namely anger, happiness, sadness, and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute to the development of virtual reality and game environments in computer-aided graphics animation systems.
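Blend shape interpolation, the BSI half of the approach described above, is in essence a weighted sum of per-expression vertex offsets added to a neutral mesh. A minimal sketch follows; the two-vertex "mesh" and the weight values are invented purely for illustration, and in a real system FACS action units would drive the weights.

```python
import numpy as np

# Minimal blend shape interpolation sketch: the animated face equals the
# neutral mesh plus a weighted sum of (target - neutral) vertex offsets.
def blend(neutral, shapes, weights):
    """neutral: (V, 3) vertex array; shapes: dict name -> (V, 3) target
    mesh; weights: dict name -> scalar blend weight in [0, 1]."""
    result = neutral.astype(float).copy()
    for name, w in weights.items():
        result += w * (shapes[name] - neutral)  # add weighted offset
    return result

# Toy two-vertex "face": "happy" lifts both vertices, "sad" lowers them.
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
shapes = {"happy": np.array([[0.0, 0.2, 0.0], [1.0, 0.2, 0.0]]),
          "sad":   np.array([[0.0, -0.2, 0.0], [1.0, -0.2, 0.0]])}
face = blend(neutral, shapes, {"happy": 0.5, "sad": 0.0})
# Half-weight "happy" lifts each vertex halfway: face[0] == [0.0, 0.1, 0.0]
```

Animating between expressions then reduces to interpolating the weight vector over time, which is what makes the representation attractive for real-time systems.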
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Kavi, Srinu
1984-01-01
This Working Paper Series entry presents a detailed survey of knowledge-based systems. After being in a relatively dormant state for many years, only recently is Artificial Intelligence (AI) - that branch of computer science that attempts to have machines emulate intelligent behavior - accomplishing practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or expert systems) - problem-solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as a consultant for various requirements like medical diagnosis, military threat analysis, project risk assessment, etc. These systems possess knowledge to enable them to make intelligent decisions. They are, however, not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study (MYCIN) of a KBS, a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report.
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scalable to take advantage of emerging massively parallel processor (MPP) systems.
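The vehicles-as-autonomous-processes idea can be sketched in a few lines. The class names, message format, and naive rerouting rule below are illustrative assumptions, not the framework's actual design; a real implementation would use separate processes and inter-process messaging rather than in-memory queues.

```python
import queue

class Vehicle:
    """Toy vehicle agent: holds a planned route and reacts to TMC advisories."""
    def __init__(self, vid, route):
        self.vid = vid
        self.route = list(route)
        self.inbox = queue.Queue()

    def step(self):
        # React to any pending advisory messages before driving the next link.
        while not self.inbox.empty():
            msg = self.inbox.get()
            if msg["type"] == "advisory" and msg["link"] in self.route:
                self.route.remove(msg["link"])  # naive reroute: drop congested link
        return self.route.pop(0) if self.route else None

class TMC:
    """Toy Traffic Management Center broadcasting link advisories to vehicles."""
    def __init__(self, vehicles):
        self.vehicles = vehicles

    def advise(self, link):
        for v in self.vehicles:
            v.inbox.put({"type": "advisory", "link": link})

car = Vehicle("probe-1", ["A", "B", "C"])
tmc = TMC([car])
tmc.advise("B")      # congestion reported on link B
first = car.step()   # vehicle reroutes around B, then drives link A
```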
Data analysis and integration of environmental sensors to meet human needs
NASA Astrophysics Data System (ADS)
Santamaria, Amilcare Francesco; De Rango, Floriano; Barletta, Domenico; Falbo, Domenico; Imbrogno, Alessandro
2014-05-01
Nowadays one of the main tasks of technology is to make people's lives simpler and easier. Ambient intelligence is an emerging discipline that brings intelligence to environments, making them sensitive to us. This discipline has developed following the spread of sensor devices, sensor networks, pervasive computing and artificial intelligence. In this work, we attempt to enhance the Internet of Things (IoT) with intelligence, exploring various interactions between human beings and the environment they live in. In particular, the core of the system is an automation system, which is made up of a domotic control unit and several sensors installed in the environment. The task of the sensors is to collect information from the environment and send it to the control unit. Once the information is collected, the core combines it in order to infer the most accurate human needs. The inferred human needs and the current environment status compose the inputs of the intelligence block, whose main goal is to find the right automations to satisfy human needs in real time. The system also provides a speech recognition service which allows users to interact with the system by voice, so human speech can be considered an additional input for smart automatisms.
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Christenson, D.; Gordon, M.; Kistler, R.; Lampert, S.; Marshall, R.; Mclaughlin, R.
1974-01-01
The MIDAS System is a third-generation, fast, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in Phase I of the over-all program are described. The system contains a mini-computer to control the various high-speed processing elements in the data path and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 2 x 10^5 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation. Diagnostic programs used to test MIDAS' operations are presented.
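The decision rule that MIDAS implemented in hardware is the standard multivariate-Gaussian maximum likelihood classifier. A minimal software sketch of that rule (not the MIDAS pipeline itself) follows; the function name and the toy two-class spectral data are assumptions for illustration.

```python
import numpy as np

def gaussian_ml_classify(x, means, covs, priors=None):
    """Multivariate-Gaussian maximum likelihood decision rule: assign x to the
    class whose Gaussian log-density (optionally plus log-prior) is largest."""
    scores = []
    for i, (mu, sigma) in enumerate(zip(means, covs)):
        diff = x - mu
        inv = np.linalg.inv(sigma)
        log_det = np.linalg.slogdet(sigma)[1]
        score = -0.5 * (diff @ inv @ diff) - 0.5 * log_det
        if priors is not None:
            score += np.log(priors[i])
        scores.append(score)
    return int(np.argmax(scores))

# Two toy spectral signatures (class means) with identity covariances
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
label = gaussian_ml_classify(np.array([4.5, 5.2]), means, covs)
```

The "signature extraction" step mentioned in the abstract corresponds to estimating the per-class means and covariances from training pixels.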
Lenior, O N M
2012-01-01
The challenges put on large baggage systems by airports can be summarized as: handling a high number of bags in a short period of time, in a limited space, with all sorts of disruptions, whilst complying with stringent regulations on security, sustainability, and health and safety. The aim of this company case study is to show, in the different project phases as indicated in the system ergonomic approach, how the human factors specialist can play a major part in tackling these challenges. By describing different projects in terms of scope, organization, human factors topics covered, phases and lessons learned, the importance of human-computer interaction, automation, as well as manual handling and work organization in baggage handling is addressed.
NASA Astrophysics Data System (ADS)
Barrett, Christopher L.; Bisset, Keith; Chen, Jiangzhuo; Eubank, Stephen; Lewis, Bryan; Kumar, V. S. Anil; Marathe, Madhav V.; Mortveit, Henning S.
Human behavior, social networks, and the civil infrastructures are closely intertwined. Understanding their co-evolution is critical for designing public policies and decision support for disaster planning. For example, human behaviors and day to day activities of individuals create dense social interactions that are characteristic of modern urban societies. These dense social networks provide a perfect fabric for fast, uncontrolled disease propagation. Conversely, people’s behavior in response to public policies and their perception of how the crisis is unfolding as a result of disease outbreak can dramatically alter the normally stable social interactions. Effective planning and response strategies must take these complicated interactions into account. In this chapter, we describe a computer simulation based approach to study these issues using public health and computational epidemiology as an illustrative example. We also formulate game-theoretic and stochastic optimization problems that capture many of the problems that we study empirically.
KARL: A Knowledge-Assisted Retrieval Language. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, relationships between entities, and basic English language syntax and semantics to translate the user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that will assist in improving the human-machine interface will be needed and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development into the emerging field of natural language database query systems.
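The thesis's restricted-English pipeline (dictionary lookup, syntax/semantic verification, formal query generation) can be caricatured in a few lines. The grammar, entity dictionary, and SQL-like output below are invented for illustration and are far simpler than the KARL design.

```python
import re

def to_query(request, entities):
    """Tiny restricted-English translator: parse a fixed sentence pattern,
    verify the entity and attribute against a dictionary, then emit a
    formal (SQL-like) query."""
    m = re.match(r"find (\w+)(?: where (\w+) (>|<|=) (\w+))?$", request.lower())
    if not m:
        raise ValueError("request not in restricted grammar")
    entity, field, op, value = m.groups()
    if entity not in entities:
        raise ValueError(f"unknown entity: {entity}")
    if field is None:
        return f"SELECT * FROM {entity}"
    if field not in entities[entity]:
        raise ValueError(f"unknown attribute: {field}")
    return f"SELECT * FROM {entity} WHERE {field} {op} {value}"

# Hypothetical "intelligent dictionary" of entities and their attributes
entities = {"employees": {"salary", "name"}}
q = to_query("find employees where salary > 50000", entities)
```

The learning capability described in the thesis would correspond to growing the dictionary and grammar from user interactions rather than fixing them in advance.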
Supervised interpretation of echocardiograms with a psychological model of expert supervision
NASA Astrophysics Data System (ADS)
Revankar, Shriram V.; Sher, David B.; Shalin, Valerie L.; Ramamurthy, Maya
1993-07-01
We have developed a collaborative scheme that facilitates active human supervision of the binary segmentation of an echocardiogram. The scheme complements the reliability of a human expert with the precision of segmentation algorithms. In the developed system, an expert user compares the computer generated segmentation with the original image in a user friendly graphics environment, and interactively indicates the incorrectly classified regions either by pointing or by circling. The precise boundaries of the indicated regions are computed by studying original image properties at that region, and a human visual attention distribution map obtained from the published psychological and psychophysical research. We use the developed system to extract contours of heart chambers from a sequence of two dimensional echocardiograms. We are currently extending this method to incorporate a richer set of inputs from the human supervisor, to facilitate multi-classification of image regions depending on their functionality. We are integrating into our system the knowledge related constraints that cardiologists use, to improve the capabilities of our existing system. This extension involves developing a psychological model of expert reasoning, functional and relational models of typical views in echocardiograms, and corresponding interface modifications to map the suggested actions to image processing algorithms.
Social robots as embedded reinforcers of social behavior in children with autism.
Kim, Elizabeth S; Berkovits, Lauren D; Bernier, Emily P; Leyzberg, Dan; Shic, Frederick; Paul, Rhea; Scassellati, Brian
2013-05-01
In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.
ERIC Educational Resources Information Center
Siler, Stephanie Ann; VanLehn, Kurt
2009-01-01
Face-to-face (FTF) human-human tutoring has ranked among the most effective forms of instruction. However, because computer-mediated (CM) tutoring is becoming increasingly common, it is instructive to evaluate its effectiveness relative to face-to-face tutoring. Does the lack of spoken, face-to-face interaction affect learning gains and…
ERIC Educational Resources Information Center
Tsai, Yueh-Feng; Kaufman, David
2014-01-01
Previous research by Tsai and Kaufman (2010a, 2010b) has suggested that computer-simulated virtual pet dogs can be used as a potential medium to enhance children's development of empathy and humane attitudes toward animals. To gain a deeper understanding of how and why interacting with a virtual pet dog might influence children's social and…
Multimodal Neuroelectric Interface Development
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Totah, Joseph (Technical Monitor)
2001-01-01
This project aims to improve performance of NASA missions by developing multimodal neuroelectric technologies for augmented human-system interaction. Neuroelectric technologies will add completely new modes of interaction that operate in parallel with keyboards, speech, or other manual controls, thereby increasing the bandwidth of human-system interaction. We recently demonstrated the feasibility of real-time electromyographic (EMG) pattern recognition for a direct neuroelectric human-computer interface. We recorded EMG signals from an elastic sleeve with dry electrodes, while a human subject performed a range of discrete gestures. A machine-learning algorithm was trained to recognize the EMG patterns associated with the gestures and map them to control signals. Successful applications now include piloting two Class 4 aircraft simulations (F-15 and 757) and entering data with a "virtual" numeric keyboard. Current research focuses on on-line adaptation of EMG sensing and processing and recognition of continuous gestures. We are also extending this on-line pattern recognition methodology to electroencephalographic (EEG) signals. This will allow us to bypass muscle activity and draw control signals directly from the human brain. Our system can reliably detect mu-rhythm (a periodic EEG signal from motor cortex in the 10 Hz range) with a lightweight headset containing saline-soaked sponge electrodes. The data show that EEG mu-rhythm can be modulated by real and imaginary motions. Current research focuses on using biofeedback to train human subjects to modulate EEG rhythms on demand, and to examine interactions of EEG-based control with EMG-based and manual control. Viewgraphs on these neuroelectric technologies are also included.
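Detecting the ~10 Hz motor-cortex rhythm described above typically reduces to estimating band power. A minimal sketch follows, using a synthetic "EEG" trace (a 10 Hz sine plus noise); the function name, sampling rate, and signal are assumptions, not the project's actual processing chain.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Fraction of total spectral power in [f_lo, f_hi] Hz, from the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum() / spectrum.sum()

fs = 256                             # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)      # 2 s of data
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

mu_fraction = band_power(eeg, fs, 8, 12)  # power fraction in the 8-12 Hz band
```

An on-line detector would compute this over a sliding window and threshold the fraction to decide whether the rhythm is present or suppressed.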
ERIC Educational Resources Information Center
Gooch, Sherwin
The original PLATO music concept was to replace the human performer in the feedback process, wherein the composer specifies an action and monitors the outcome, with a computer-controlled device. The first device of this type is known as the Gooch Synthetic Woodwind (GSW), which attempted to provide some of the features needed in an interactive,…
Intelligent Fuzzy Spelling Evaluator for e-Learning Systems
ERIC Educational Resources Information Center
Chakraborty, Udit Kr.; Konar, Debanjan; Roy, Samir; Choudhury, Sankhayan
2016-01-01
Evaluating Learners' Response in an e-Learning environment has been the topic of current research in areas of Human Computer Interaction, e-Learning, Education Technology and even Natural Language Processing. The current paper presents a twofold strategy to evaluate single word response of a learner in an e-Learning environment. The response of…
Canino-Rodríguez, José M; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G; Travieso-González, Carlos; Alonso-Hernández, Jesús B
2015-03-04
The limited efficiency of current air traffic systems will require a next-generation Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper focuses on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers' indications, etc. The HCI is thus intended to enhance situation awareness and decision-making in the cockpit. This work considers SATS as a large-scale distributed system with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.
Canino-Rodríguez, José M.; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G.; Travieso-González, Carlos; Alonso-Hernández, Jesús B.
2015-01-01
The limited efficiency of current air traffic systems will require a next-generation Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper focuses on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers' indications, etc. The HCI is thus intended to enhance situation awareness and decision-making in the cockpit. This work considers SATS as a large-scale distributed system with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications. PMID:25746092
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific for each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures are focused on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services regarding rehabilitation activities. The algorithmic processes involved during gesture recognition activity, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients, during functional recovery. Pilot examples of designed applications and preliminary system evaluation are reported and discussed.
The study of early human embryos using interactive 3-dimensional computer reconstructions.
Scarborough, J; Aiton, J F; McLachlan, J C; Smart, S D; Whiten, S C
1997-07-01
Tracings of serial histological sections from 4 human embryos at different Carnegie stages were used to create 3-dimensional (3D) computer models of the developing heart. The models were constructed using commercially available software developed for graphic design and the production of computer generated virtual reality environments. They are available as interactive objects which can be downloaded via the World Wide Web. This simple method of 3D reconstruction offers significant advantages for understanding important events in morphological sciences.
Simulating Human Cognition in the Domain of Air Traffic Control
NASA Technical Reports Server (NTRS)
Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.
Automatic recognition of emotions from facial expressions
NASA Astrophysics Data System (ADS)
Xue, Henry; Gertner, Izidor
2014-06-01
In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
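The "resize and apply a set of filters" step can be sketched concretely. The block-averaging resize, the two tiny edge filters, and all names below are illustrative assumptions, not the authors' actual feature extraction method.

```python
import numpy as np

def extract_features(image, size=16, filters=None):
    """Resize an image by block averaging, then apply a small filter bank;
    the flattened filter responses form the feature vector."""
    h, w = image.shape
    bh, bw = h // size, w // size
    # Block-average downsample to size x size
    small = image[:bh * size, :bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
    if filters is None:
        filters = [np.array([[1, -1]]), np.array([[1], [-1]])]  # horiz/vert edges
    feats = []
    for f in filters:
        # 'valid' 2-D correlation, written as explicit loops for clarity
        fh, fw = f.shape
        resp = np.array([[(small[i:i + fh, j:j + fw] * f).sum()
                          for j in range(size - fw + 1)]
                         for i in range(size - fh + 1)])
        feats.append(resp.ravel())
    return np.concatenate(feats)

face = np.random.default_rng(1).random((64, 48))  # stand-in for a JAFFE image
vec = extract_features(face)
```

Feature vectors of this kind would then be fed to the classifier (e.g. an SVM) for emotion categorization.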
Computational principles of working memory in sentence comprehension.
Lewis, Richard L; Vasishth, Shravan; Van Dyke, Julie A
2006-10-01
Understanding a sentence requires a working memory of the partial products of comprehension, so that linguistic relations between temporally distal parts of the sentence can be rapidly computed. We describe an emerging theoretical framework for this working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item (but not order) information subject to interference from similar items, and activation decay (forgetting over time). A computational model embodying these principles provides an explanation of the functional capacities and severe limitations of human processing, as well as accounts of reading times. The broad implication is that the detailed nature of cross-linguistic sentence processing emerges from the interaction of general principles of human memory with the specialized task of language comprehension.
ERIC Educational Resources Information Center
Johnston, Kevin McCullough
2001-01-01
Considers the design of corporate communications for electronic business and discusses the increasing importance of corporate interaction as companies work in virtual environments. Compares sociological and psychological theories of human interaction and relationship formation with organizational interaction theories of corporate relationship…
Affective Computing and the Impact of Gender and Age
Rukavina, Stefanie; Gruss, Sascha; Hoffmann, Holger; Tan, Jun-Wen; Walter, Steffen; Traue, Harald C.
2016-01-01
Affective computing aims at the detection of users’ mental states, in particular, emotions and dispositions during human-computer interactions. Detection can be achieved by measuring multimodal signals, namely, speech, facial expressions and/or psychobiology. Over the past years, one major approach was to identify the best features for each signal using different classification methods. Although this is of high priority, other subject-specific variables should not be neglected. In our study, we analyzed the effect of gender, age, personality and gender roles on the extracted psychobiological features (derived from skin conductance level, facial electromyography and heart rate variability) as well as the influence on the classification results. In an experimental human-computer interaction, five different affective states with picture material from the International Affective Picture System and ULM pictures were induced. A total of 127 subjects participated in the study. Among all potentially influencing variables (gender has been reported to be influential), age was the only variable that correlated significantly with psychobiological responses. In summary, the conducted classification processes resulted in 20% classification accuracy differences according to age and gender, especially when comparing the neutral condition with four other affective states. We suggest taking age and gender specifically into account for future studies in affective computing, as these may lead to an improvement of emotion recognition accuracy. PMID:26939129
Linear and non-linear interdependence of EEG and HRV frequency bands in human sleep.
Chaparro-Vargas, Ramiro; Dissanayaka, P Chamila; Patti, Chanakya Reddy; Schilling, Claudia; Schredl, Michael; Cvetkovic, Dean
2014-01-01
The characterisation of functional interdependencies of the autonomic nervous system (ANS) is of ever-growing interest for unveiling electroencephalographic (EEG) and Heart Rate Variability (HRV) interactions. This paper presents a biosignal processing approach as a supportive computational resource in the estimation of sleep dynamics. The application of linear and non-linear methods and statistical tests to 10 overnight polysomnographic (PSG) recordings allowed the computation of wavelet coherence and phase locking values, in order to identify discerning features amongst clinically healthy subjects. Our findings showed that neuronal oscillations θ, α and σ interact with cardiac power bands at mid-to-high levels of coherence and phase locking, particularly during NREM sleep stages.
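The phase locking value (PLV) used in such analyses can be sketched compactly: extract instantaneous phases via the analytic signal, then take the magnitude of the mean unit phasor of the phase difference. The FFT-based Hilbert transform and the synthetic, perfectly locked test signals below are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.angle(np.fft.ifft(X * h))

def phase_locking_value(x, y):
    """PLV: magnitude of the mean unit phasor of the phase difference.
    1 means perfectly locked phases; values near 0 mean no locking."""
    dphi = analytic_phase(x) - analytic_phase(y)
    return abs(np.exp(1j * dphi).mean())

fs = 128
t = np.arange(0, 4.0, 1.0 / fs)
theta = np.sin(2 * np.pi * 6 * t)             # stand-in for an EEG theta band
hrv = np.sin(2 * np.pi * 6 * t + np.pi / 4)   # phase-shifted but locked signal
plv = phase_locking_value(theta, hrv)
```

In practice both signals are band-pass filtered first, and the PLV is computed per band pair (e.g. EEG θ vs. an HRV power band) and compared across sleep stages.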
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Christenson, D.; Gordon, M.; Kistler, R.; Lampert, S.; Marshall, R.; Mclaughlin, R.
1974-01-01
The MIDAS System is a third-generation, fast, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in Phase I of the overall program are described. The system contains a mini-computer to control the various high-speed processing elements in the data path and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 2 x 10^5 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation. The MIDAS construction and wiring diagrams are given.
Leveraging Human Insights by Combining Multi-Objective Optimization with Interactive Evolution
2015-03-26
…application, a program that used human selections to guide the evolution of insect-like images. He was able to demonstrate that humans provide key insights…
Thesis by Joshua R. Christman, Second Lieutenant, USAF, presented to the Faculty of the Department of Electrical and Computer Engineering.
Graphics Flutter Analysis Methods, an interactive computing system at Lockheed-California Company
NASA Technical Reports Server (NTRS)
Radovcich, N. A.
1975-01-01
An interactive computer graphics system, Graphics Flutter Analysis Methods (GFAM), was developed to complement FAMAS, a matrix-oriented batch computing system, and other computer programs in performing complex numerical calculations using a fully integrated data management system. GFAM has many of the matrix operation capabilities found in FAMAS, but on a smaller scale, and is utilized when the analysis requires a high degree of interaction between the engineer and computer, and schedule constraints exclude the use of batch entry programs. Applications of GFAM to a variety of preliminary design, development design, and project modification programs suggest that interactive flutter analysis using matrix representations is a feasible and cost effective computing tool.
The flight robotics laboratory
NASA Technical Reports Server (NTRS)
Tobbe, Patrick A.; Williamson, Marlin J.; Glaese, John R.
1988-01-01
The Flight Robotics Laboratory of the Marshall Space Flight Center is described in detail. This facility, containing an eight degree of freedom manipulator, precision air bearing floor, teleoperated motion base, reconfigurable operator's console, and VAX 11/750 computer system, provides simulation capability to study human/system interactions of remote systems. The facility hardware, software and subsequent integration of these components into a real time man-in-the-loop simulation for the evaluation of spacecraft contact proximity and dynamics are described.
Computer Assistance for Writing Interactive Programs: TICS.
ERIC Educational Resources Information Center
Kaplow, Roy; And Others
1973-01-01
Investigators developed an on-line, interactive programing system--the Teacher-Interactive Computer System (TICS)--to provide assistance to those who were not programers, but nevertheless wished to write interactive instructional programs. TICS had two components: an author system and a delivery system. Underlying assumptions were that…
Practice and Personhood in Professional Interaction: Social Identities and Information Needs.
ERIC Educational Resources Information Center
Mokros, Hartmut B.; And Others
1995-01-01
Explores the human aspect of information retrieval by examining the behavior and pronoun use of librarians in the course of communicating with patrons during online computer search interactions. Compares two studies on the conduct of librarians as intermediaries in naturally occurring online computer search interactions. (JMV)
Eye Tracking and Head Movement Detection: A State-of-Art Survey
2013-01-01
Eye-gaze detection and tracking have been an active research field in the past years as it adds convenience to a variety of applications. It is considered a significant untraditional method of human computer interaction. Head movement detection has also received researchers' attention and interest as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-art survey for eye tracking and head movement detection methods proposed in the literature. Examples of different fields of applications for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies are also investigated. PMID:27170851
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A
Interactive data visualization leverages human visual perception and cognition to improve the accuracy and effectiveness of data analysis. When combined with automated data analytics, data visualization systems orchestrate the strengths of humans with the computational power of machines to solve problems neither approach can manage in isolation. In the intelligent transportation system domain, such systems are necessary to support decision making in large and complex data streams. In this chapter, we provide an introduction to several key topics related to the design of data visualization systems. In addition to an overview of key techniques and strategies, we will describe practical design principles. The chapter is concluded with a detailed case study involving the design of a multivariate visualization tool.
The development of the Canadian Mobile Servicing System Kinematic Simulation Facility
NASA Technical Reports Server (NTRS)
Beyer, G.; Diebold, B.; Brimley, W.; Kleinberg, H.
1989-01-01
Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). In order to support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) A two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) Kinematic simulations of the space station remote manipulators (SSRMS and SPDM), and mobile base; and (3) A three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements combined with state of the art computer graphics hardware provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
SnapAnatomy, a computer-based interactive tool for independent learning of human anatomy.
Yip, George W; Rajendran, Kanagasuntheram
2008-06-01
Computer-aided instruction materials are becoming increasingly popular in medical education, particularly in the teaching of human anatomy. This paper describes SnapAnatomy, a new interactive program that the authors designed for independent learning of anatomy. SnapAnatomy is primarily tailored for the beginner student to encourage the learning of anatomy by developing a three-dimensional visualization of human structure that is essential to applications in clinical practice and to the understanding of function. The program allows the student to take apart and to accurately put together body components in an interactive, self-paced and variable manner to achieve the learning outcome.
The Ames Virtual Environment Workstation: Implementation issues and requirements
NASA Technical Reports Server (NTRS)
Fisher, Scott S.; Jacoby, R.; Bryson, S.; Stone, P.; Mcdowall, I.; Bolas, M.; Dasaro, D.; Wenzel, Elizabeth M.; Coler, C.; Kerr, D.
1991-01-01
This presentation describes recent developments in the implementation of a virtual environment workstation in the Aerospace Human Factors Research Division of NASA's Ames Research Center. Introductory discussions are presented on the primary research objectives and applications of the system and on the system's current hardware and software configuration. Principal attention is then focused on unique issues and problems encountered in the workstation's development, with emphasis on its ability to meet original design specifications for computational graphics performance and for associated human factors requirements necessary to provide a compelling sense of presence and efficient interaction in the virtual environment.
Zander, Thorsten O; Kothe, Christian
2011-04-01
Cognitive monitoring is an approach utilizing real-time brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems based solely on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself, we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.
Kahler, Christopher W; Lechner, William J; MacGlashan, James; Wray, Tyler B; Littman, Michael L
2017-06-28
Computer-delivered interventions have been shown to be effective in reducing alcohol consumption in heavy drinking college students. However, these computer-delivered interventions rely on mouse, keyboard, or touchscreen responses for interactions between the users and the computer-delivered intervention. The principles of motivational interviewing suggest that in-person interventions may be effective, in part, because they encourage individuals to think through and speak aloud their motivations for changing a health behavior, which current computer-delivered interventions do not allow. The objective of this study was to take the initial steps toward development of a voice-based computer-delivered intervention that can ask open-ended questions and respond appropriately to users' verbal responses, more closely mirroring a human-delivered motivational intervention. We developed (1) a voice-based computer-delivered intervention that was run by a human controller and that allowed participants to speak their responses to scripted prompts delivered by speech generation software and (2) a text-based computer-delivered intervention that relied on the mouse, keyboard, and computer screen for all interactions. We randomized 60 heavy drinking college students to interact with the voice-based computer-delivered intervention and 30 to interact with the text-based computer-delivered intervention and compared their ratings of the systems as well as their motivation to change drinking and their drinking behavior at 1-month follow-up. Participants reported that the voice-based computer-delivered intervention engaged positively with them in the session and delivered content in a manner consistent with motivational interviewing principles. 
At 1-month follow-up, participants in the voice-based computer-delivered intervention condition reported significant decreases in quantity, frequency, and problems associated with drinking, and increased perceived importance of changing drinking behaviors. In comparison to the text-based computer-delivered intervention condition, those assigned to voice-based computer-delivered intervention reported significantly fewer alcohol-related problems at the 1-month follow-up (incidence rate ratio 0.60, 95% CI 0.44-0.83, P=.002). The conditions did not differ significantly on perceived importance of changing drinking or on measures of drinking quantity and frequency of heavy drinking. Results indicate that it is feasible to construct a series of open-ended questions and a bank of responses and follow-up prompts that can be used in a future fully automated voice-based computer-delivered intervention that may mirror more closely human-delivered motivational interventions to reduce drinking. Such efforts will require using advanced speech recognition capabilities and machine-learning approaches to train a program to mirror the decisions made by human controllers in the voice-based computer-delivered intervention used in this study. In addition, future studies should examine enhancements that can increase the perceived warmth and empathy of voice-based computer-delivered intervention, possibly through greater personalization, improvements in the speech generation software, and embodying the computer-delivered intervention in a physical form. ©Christopher W Kahler, William J Lechner, James MacGlashan, Tyler B Wray, Michael L Littman. Originally published in JMIR Mental Health (http://mental.jmir.org), 28.06.2017.
NASA Astrophysics Data System (ADS)
Nakamura, Christopher M.; Murphy, Sytil K.; Christel, Michael G.; Stevens, Scott M.; Zollman, Dean A.
2016-06-01
Computer-automated assessment of students' text responses to short-answer questions represents an important enabling technology for online learning environments. We have investigated the use of machine learning to train computer models capable of automatically classifying short-answer responses and assessed the results. Our investigations are part of a project to develop and test an interactive learning environment designed to help students learn introductory physics concepts. The system is designed around an interactive video tutoring interface. We have analyzed 9 questions, each with about 150 responses or fewer. For 4 of the 9, automated assessment achieved interrater agreement of 70% or better with the human rater. This level of agreement may represent a baseline for practical utility in instruction and indicates that the method warrants further investigation for use in this type of application. Our results also suggest strategies that may be useful for writing activities and questions that are more appropriate for automated assessment. These strategies include building activities that have relatively few conceptually distinct ways of perceiving the physical behavior of relatively few physical objects. Further success in this direction may allow us to promote interactivity and provide better feedback in online learning systems. These capabilities could enable our system to function more like a real tutor.
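The 70%-or-better interrater agreement reported above is a simple percent-agreement computation between a human rater and the automated classifier. A minimal sketch of that metric follows; the labels and data are illustrative, not from the study:

```python
# Hypothetical sketch: percent agreement between a human rater's labels and
# an automated classifier's labels for short-answer responses.

def percent_agreement(human_labels, machine_labels):
    """Fraction of responses on which the two raters assign the same label."""
    if len(human_labels) != len(machine_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(h == m for h, m in zip(human_labels, machine_labels))
    return matches / len(human_labels)

human = ["correct", "partial", "incorrect", "correct", "correct"]
machine = ["correct", "partial", "correct", "correct", "incorrect"]
print(percent_agreement(human, machine))  # 3 of 5 agree -> 0.6
```

A production assessment pipeline would also report chance-corrected statistics (e.g., Cohen's kappa), since raw percent agreement can be inflated when one label dominates.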
Experiencing the Sights, Smells, Sounds, and Climate of Southern Italy in VR.
Manghisi, Vito M; Fiorentino, Michele; Gattullo, Michele; Boccaccio, Antonio; Bevilacqua, Vitoantonio; Cascella, Giuseppe L; Dassisti, Michele; Uva, Antonio E
2017-01-01
This article explores what it takes to make interactive computer graphics and VR attractive as a promotional vehicle, from the points of view of tourism agencies and the tourists themselves. The authors exploited current VR and human-machine interface (HMI) technologies to develop an interactive, innovative, and attractive user experience called the Multisensory Apulia Touristic Experience (MATE). The MATE system implements a natural gesture-based interface and multisensory stimuli, including visuals, audio, smells, and climate effects.
IEEE 1982. Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-01-01
The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
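The selection step the patent abstract describes, choosing a discrete processor frequency from a stored per-user, per-application satisfaction relationship, can be sketched as below. The data, threshold, and function names are invented for illustration and are not the patented implementation:

```python
# Illustrative sketch: pick the lowest discrete CPU frequency whose recorded
# user-satisfaction score for a given (user, application) pair meets a target.

# Hypothetical stored relationship: (user, app) -> {frequency_MHz: satisfaction}
relationship = {
    ("alice", "editor"): {800: 0.95, 1600: 0.97, 2400: 0.98},
    ("alice", "video"):  {800: 0.40, 1600: 0.85, 2400: 0.96},
}

def select_frequency(user, app, target=0.9):
    """Return the lowest frequency meeting the satisfaction target,
    falling back to the highest available frequency."""
    scores = relationship[(user, app)]
    ok = [f for f, s in sorted(scores.items()) if s >= target]
    return ok[0] if ok else max(scores)

print(select_frequency("alice", "editor"))  # 800  (a low clock already satisfies)
print(select_frequency("alice", "video"))   # 2400 (only the top clock reaches 0.9)
```

Choosing the lowest satisfying frequency is what saves power: the same user tolerates a slower clock for a light interactive application than for a demanding one.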
Closed-loop bird-computer interactions: a new method to study the role of bird calls.
Lerch, Alexandre; Roy, Pierre; Pachet, François; Nagle, Laurent
2011-03-01
In the field of songbird research, many studies have shown the role of male songs in territorial defense and courtship. Calling, another important acoustic communication signal, has received much less attention, however, because calls are assumed to contain less information about the emitter than songs do. Birdcall repertoire is diverse, and the role of calls has been found to be significant in the area of social interaction, for example, in pair, family, and group cohesion. However, standard methods for studying calls do not allow precise and systematic study of their role in communication. We propose herein a new method to study bird vocal interaction. A closed-loop computer system interacts with canaries, Serinus canaria, by (1) automatically classifying two basic types of canary vocalization, single versus repeated calls, as they are produced by the subject, and (2) responding with a preprogrammed call type recorded from another bird. This computerized animal-machine interaction requires no human interference. We show first that the birds do engage in sustained interactions with the system, by studying the rate of single and repeated calls for various programmed protocols. We then show that female canaries differentially use single and repeated calls. First, they produce significantly more single than repeated calls, and second, the rate of single calls is associated with the context in which they interact, whereas repeated calls are context independent. This experiment is the first illustration of how closed-loop bird-computer interaction can be used productively to study social relationships. © Springer-Verlag 2010
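The system's first step, automatically distinguishing single from repeated calls as they are produced, could plausibly be implemented by grouping detected call onsets into bursts. The sketch below is an assumption for illustration, not the authors' classifier; the 0.5 s gap threshold is invented:

```python
# Hedged sketch: a "repeated" call is treated as a burst of call notes
# separated by short gaps, while a "single" call stands alone.

def classify_calls(onsets, max_gap=0.5):
    """Group call onset times (seconds) into bursts; label each burst
    'single' or 'repeated' by the number of notes it contains."""
    if not onsets:
        return []
    bursts, current = [], [onsets[0]]
    for t in onsets[1:]:
        if t - current[-1] <= max_gap:
            current.append(t)      # note continues the current burst
        else:
            bursts.append(current)  # gap too long: close the burst
            current = [t]
    bursts.append(current)
    return ["repeated" if len(b) > 1 else "single" for b in bursts]

print(classify_calls([0.0, 0.2, 0.4, 3.0, 7.0, 7.3]))
# ['repeated', 'single', 'repeated']
```

In a closed-loop setup, each classified burst would trigger playback of the preprogrammed response call for the active protocol.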
Effects of Visual Cues and Self-Explanation Prompts: Empirical Evidence in a Multimedia Environment
ERIC Educational Resources Information Center
Lin, Lijia; Atkinson, Robert K.; Savenye, Wilhelmina C.; Nelson, Brian C.
2016-01-01
The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load, and intrinsic motivation in an interactive multimedia environment that was designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were…
ERIC Educational Resources Information Center
Chatzara, K.; Karagiannidis, C.; Stamatis, D.
2016-01-01
This paper presents an anthropocentric approach in human-machine interaction in the area of self-regulated e-learning. In an attempt to enhance communication mediated through computers for pedagogical use we propose the incorporation of an intelligent emotional agent that is represented by a synthetic character with multimedia capabilities,…
Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.
ERIC Educational Resources Information Center
Skopec, Eric Wm.
Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…
The development of a virtual camera system for astronaut-rover planetary exploration.
Platt, Donald W; Boy, Guy A
2012-01-01
A virtual assistant is being developed for use by astronauts as they use rovers to explore the surface of other planets. This interactive database, called the Virtual Camera (VC), allows the user to have better situational awareness for exploration. It can be used for training, data analysis and augmentation of actual surface exploration. This paper describes the development efforts and Human-Computer Interaction considerations for implementing a first-generation VC on a tablet mobile computer device. Scenarios for use will be presented. Evaluation and success criteria such as efficiency in terms of processing time and precision, situational awareness, learnability, usability, and robustness will also be presented. Initial testing and the impact of HCI design considerations of manipulation and improvement in situational awareness using a prototype VC will be discussed.
ABSENTEE COMPUTATIONS IN A MULTIPLE-ACCESS COMPUTER SYSTEM.
…require user interaction, and the user may therefore want to run these computations 'absentee' (or, user not present). A mechanism is presented which provides for the handling of absentee computations in a multiple-access computer system. The design is intended to be implementation-independent. Some novel features of the system's design are: a user can switch computations from interactive to absentee (and vice versa), the system can…
Barton, C Michael; Ullah, Isaac I; Bergin, Sean
2010-11-28
The evolution of Mediterranean landscapes during the Holocene has been increasingly governed by the complex interactions of water and human land use. Different land-use practices change the amount of water flowing across the surface and infiltrating the soil, and change water's ability to move surface sediments. Conversely, water amplifies the impacts of human land use and extends the ecological footprint of human activities far beyond the borders of towns and fields. Advances in computational modelling offer new tools to study the complex feedbacks between land use, land cover, topography and surface water. The Mediterranean Landscape Dynamics project (MedLand) is building a modelling laboratory where experiments can be carried out on the long-term impacts of agropastoral land use, and whose results can be tested against the archaeological record. These computational experiments are providing new insights into the socio-ecological consequences of human decisions at varying temporal and spatial scales.
Enhancing Tele-robotics with Immersive Virtual Reality
2017-11-03
Graduate and undergraduate students within the Digital Gaming and Simulation, Computer Science, and psychology programs have actively collaborated… investigates the use of artificial intelligence and visual computing. Numerous fields across the human-computer interaction and gaming research areas… invested in digital gaming and simulation to cognitively stimulate humans by computers, forming a $10.5B industry [1]. On the other hand, cognitive…
Mehmood, Raja Majid; Lee, Hyo Jong
2017-01-01
Human computer interaction is a growing field in terms of helping people improve their daily lives. In particular, people with disabilities may need an interface that is more appropriate for and compatible with their needs. Our research is focused on similar kinds of problems, such as students with a mental disorder or mood disruption problems. To improve their learning process, an intelligent emotion recognition system is essential, one with the ability to recognize the current emotional state of the brain. Nowadays, in special schools, instructors commonly use conventional methods for managing special students for educational purposes. In this paper, we propose a novel computer-aided method for instructors at special schools where they can teach special students with the support of our system using wearable technologies. PMID:28208734
Distributed intelligence for supervisory control
NASA Technical Reports Server (NTRS)
Wolfe, W. J.; Raney, S. D.
1987-01-01
Supervisory control systems must deal with various types of intelligence distributed throughout the layers of control. Typical layers are real-time servo control, off-line planning and reasoning subsystems and finally, the human operator. Design methodologies must account for the fact that the majority of the intelligence will reside with the human operator. Hierarchical decompositions and feedback loops as conceptual building blocks that provide a common ground for man-machine interaction are discussed. Examples of types of parallelism and parallel implementation on several classes of computer architecture are also discussed.
Interactive computer graphics and its role in control system design of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems to maintain attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model such as modeling the dynamics, modal analysis, and control system design methodology are reviewed and the need of the interactive computer graphics is demonstrated. Typical constituent parts of large space structures such as free-free beams and free-free plates are used to demonstrate the complexity of the control system design and the effectiveness of the interactive computer graphics.
NASA Technical Reports Server (NTRS)
Biegel, Bryan A. (Technical Monitor); Sandstrom, Timothy A.; Henze, Chris; Levit, Creon
2003-01-01
This paper presents the hyperwall, a visualization cluster that uses coordinated visualizations for interactive exploration of multidimensional data and simulations. The system strongly leverages the human eye-brain system with a generous 7x7 array of flat panel LCD screens powered by a Beowulf cluster. With each screen backed by a workstation-class PC, graphics- and compute-intensive applications can be applied to a broad range of data. Navigational tools are presented that allow for investigation of high dimensional spaces.
NASA Technical Reports Server (NTRS)
1985-01-01
Operational forecasters have habitually been plagued with the problems associated with acquisition, display, and dissemination of data used in preparing forecasts. The centralized storm information system (CSIS) experiment provided an operational forecaster with an interactive computer system which could perform these preliminary tasks more quickly and accurately than any human could. CSIS objectives pertaining to improved severe storms forecasting and warning procedures are addressed.
Computational Modeling and Simulation of Developmental ...
SYNOPSIS: The question of how tissues and organs are shaped during development is crucial for understanding human birth defects. Data from high-throughput screening assays on human stem cells may be utilized to predict developmental toxicity with reasonable accuracy. Other types of models are necessary, however, for mechanism-specific analysis because embryogenesis requires precise timing and control. Agent-based modeling and simulation (ABMS) is an approach to virtually reconstruct these dynamics, cell-by-cell and interaction-by-interaction. Using ABMS, HTS lesions from ToxCast can be integrated with patterning systems heuristically to propagate key events. This presentation to FDA-CFSAN will update progress on the applications of in silico modeling tools and approaches for assessing developmental toxicity.
A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents
Griol, David
2016-01-01
Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and human-computer interfaces. In this contribution, we present a framework based on neural networks that allows modeling of the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system taking into consideration the user's needs and preferences. We have evaluated our proposal to develop a user-adapted spoken dialogue system that facilitates tourist information and services and provide a detailed discussion of the positive influence of our proposal in the success of the interaction, the information and services provided, and the quality perceived by the users. PMID:26819592
[Development of automatic urine monitoring system].
Wei, Liang; Li, Yongqin; Chen, Bihua
2014-03-01
An automatic urine monitoring system is presented to replace manual operation. The system is composed of a flow sensor, an MSP430F149 single-chip microcomputer, a human-computer interaction module, an LCD module, a clock module and a memory module. The signal of urine volume is captured when the urine flows through the flow sensor and is then displayed on the LCD after data processing. The experimental results suggest that the design of the monitor provides high stability, accurate measurement and good real-time performance, and meets the demands of clinical application.
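The core data-processing step the abstract implies, turning flow-sensor readings into a cumulative urine volume, amounts to numerical integration. A minimal sketch under stated assumptions (fixed 1 s sampling, flow in mL/s; neither is taken from the MSP430 design):

```python
# Minimal sketch: integrate equally spaced flow readings (mL/s) into a
# running total of volume (mL) using rectangle-rule integration.

def cumulative_volume(flow_samples_ml_per_s, dt_s=1.0):
    """Running total of volume (mL) from equally spaced flow readings."""
    total, history = 0.0, []
    for flow in flow_samples_ml_per_s:
        total += flow * dt_s      # volume in this interval = flow * duration
        history.append(total)
    return history

print(cumulative_volume([0.0, 2.0, 3.0, 1.0]))  # [0.0, 2.0, 5.0, 6.0]
```

On the actual device, each new total would be written to the memory module and refreshed on the LCD rather than printed.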
Deep-reasoning fault diagnosis - An aid and a model
NASA Technical Reports Server (NTRS)
Yoon, Wan Chul; Hammer, John M.
1988-01-01
The design and evaluation are presented for the knowledge-based assistance of a human operator who must diagnose a novel fault in a dynamic, physical system. A computer aid based on a qualitative model of the system was built to help the operators overcome some of their cognitive limitations. This aid differs from most expert systems in that it operates at several levels of interaction that are believed to be more suitable for deep reasoning. Four aiding approaches, each of which provided unique information to the operator, were evaluated. The aiding features were designed to help the human's causal reasoning about the system in predicting normal system behavior (N aiding), integrating observations into actual system behavior (O aiding), finding discrepancies between the two (O-N aiding), or finding discrepancies between observed behavior and hypothetical behavior (O-HN aiding). Human diagnostic performance was found to improve by almost a factor of two with O aiding and O-N aiding.
Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features
NASA Astrophysics Data System (ADS)
Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng
Face recognition is one of the most active research areas in pattern recognition, not only because the face is a key biometric characteristic of human beings but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector, using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
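The two ingredients the abstract names, compact frequency-domain features and a combined L2/Lq distance for nearest-class matching, can be sketched in one dimension as below. The coefficient count, the value of q, and the equal weighting are assumptions for illustration, not the paper's tuned values, and real inputs would be 2-D face images rather than toy signals:

```python
import math

# Hedged sketch: low-order 1-D DCT-II coefficients as a compact feature
# vector, classified by a weighted combination of the L2 and Lq metrics.

def dct_features(signal, n_coeffs=4):
    """First n_coeffs DCT-II coefficients of a 1-D signal."""
    N = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(signal))
            for k in range(n_coeffs)]

def combined_distance(a, b, q=4, w=0.5):
    """Weighted sum of the L2 and Lq metrics between feature vectors."""
    l2 = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    lq = sum(abs(x - y) ** q for x, y in zip(a, b)) ** (1.0 / q)
    return w * l2 + (1 - w) * lq

def nearest_class(probe, gallery):
    """Label of the gallery feature vector closest to the probe."""
    return min(gallery, key=lambda label: combined_distance(probe, gallery[label]))

gallery = {"person_A": dct_features([1, 2, 3, 4, 5, 6, 7, 8]),
           "person_B": dct_features([8, 6, 4, 2, 2, 4, 6, 8])}
probe = dct_features([1, 2, 3, 4, 5, 6, 7, 9])  # slight perturbation of A
print(nearest_class(probe, gallery))
```

Because classification compares stored feature vectors directly, adding a new person only means adding one gallery entry, which is the retraining advantage over subspace methods like Eigenfaces.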
NASA Astrophysics Data System (ADS)
Pohlmeyer, Eric A.; Fifer, Matthew; Rich, Matthew; Pino, Johnathan; Wester, Brock; Johannes, Matthew; Dohopolski, Chris; Helder, John; D'Angelo, Denise; Beaty, James; Bensmaia, Sliman; McLoughlin, Michael; Tenore, Francesco
2017-05-01
Brain-computer interface (BCI) research has progressed rapidly, with BCIs shifting from animal tests to human demonstrations of controlling computer cursors and even advanced prosthetic limbs, the latter having been the goal of the Revolutionizing Prosthetics (RP) program. These achievements now include direct electrical intracortical microstimulation (ICMS) of the brain to provide human BCI users feedback information from the sensors of prosthetic limbs. These successes raise the question of how well people would be able to use BCIs to interact with systems that are not based directly on the body (e.g., prosthetic arms), and how well BCI users could interpret ICMS information from such devices. If paralyzed individuals could use BCIs to effectively interact with such non-anthropomorphic systems, it would offer them numerous new opportunities to control novel assistive devices. Here we explore how well a participant with tetraplegia can detect infrared (IR) sources in the environment using a prosthetic arm mounted camera that encodes IR information via ICMS. We also investigate how well a BCI user could transition from controlling a BCI based on prosthetic arm movements to controlling a flight simulator, a system with different physical dynamics than the arm. In that test, the BCI participant used environmental information encoded via ICMS to identify which of several upcoming flight routes was the best option. For both tasks, the BCI user was able to quickly learn how to interpret the ICMS-provided information to achieve the task goals.
Image analysis in cytology: DNA-histogramming versus cervical smear prescreening.
Bengtsson, E W; Nordin, B
1993-01-01
The visual inspection of cellular specimens and histological sections through a light microscope plays an important role in clinical medicine and biomedical research. The human visual system is very good at the recognition of various patterns but less efficient at quantitative assessment of these patterns. Some samples are prepared in great numbers, most notably the screening for cervical cancer, the so-called PAP-smears, which results in hundreds of millions of samples each year, creating a tedious mass inspection task. Numerous attempts have been made over the last 40 years to create systems that solve these two tasks, the quantitative supplement to the human visual system and the automation of mass screening. The most difficult task, the total automation, has received the greatest attention with many large scale projects over the decades. In spite of all these efforts, still no generally accepted automated prescreening device exists on the market. The main reason for this failure is the great pattern recognition capabilities needed to distinguish between cancer cells and all other kinds of objects found in the specimens: cellular clusters, debris, degenerate cells, etc. Improved algorithms, the ever-increasing processing power of computers and progress in biochemical specimen preparation techniques make it likely that eventually useful automated prescreening systems will become available. Meanwhile, much less effort has been put into the development of interactive cell image analysis systems. Still, some such systems have been developed and put into use at thousands of laboratories worldwide. In these the human pattern recognition capability is used to select the fields and objects that are to be analysed while the computational power of the computer is used for the quantitative analysis of cellular DNA content or other relevant markers. 
Numerous studies have shown that the quantitative information about the distribution of cellular DNA content is of prognostic significance in many types of cancer. Several laboratories are therefore putting these techniques into routine clinical use. The more advanced systems can also study many other markers and cellular features, some known to be of clinical interest, others useful in research. The advances in computer technology are making these systems more generally available through decreasing cost, increasing computational power and improved user interfaces. We have been involved in research and development of both automated and interactive cell analysis systems during the last 20 years. Here some experiences and conclusions from this work will be presented as well as some predictions about what can be expected in the near future.
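The DNA-histogramming workflow described above reduces, at its core, to histogramming per-cell DNA content and comparing the modal peak to a diploid reference. A minimal sketch of that idea (not the authors' actual software; the bin count, synthetic data, and function name are illustrative assumptions):

```python
import numpy as np

def dna_index(iod_values, diploid_reference):
    """Crude DNA-ploidy summary: histogram per-cell integrated optical
    density (IOD) and locate the modal peak relative to a diploid reference."""
    hist, edges = np.histogram(iod_values, bins=50)
    peak_bin = np.argmax(hist)                        # modal (G0/G1) peak
    peak_iod = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
    return peak_iod / diploid_reference               # ~1.0 diploid, ~2.0 tetraploid

# synthetic example: mostly diploid cells (IOD ~ 100) with some in S/G2 phase
rng = np.random.default_rng(0)
iods = np.concatenate([rng.normal(100, 5, 900), rng.normal(200, 10, 100)])
di = dna_index(iods, diploid_reference=100.0)
```

A DNA index near 1.0 indicates a predominantly diploid population; aneuploid populations shift or split the modal peak.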
Major component analysis of dynamic networks of physiologic organ interactions
NASA Astrophysics Data System (ADS)
Liu, Kang K. L.; Bartsch, Ronny P.; Ma, Qianli D. Y.; Ivanov, Plamen Ch
2015-09-01
The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function.
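The time-delay-stability concept used above can be illustrated with a toy estimator: measure the cross-correlation lag between two signals in sliding windows and score how constant that lag stays. This is a simplified sketch, not the authors' published method; the window length, lag range, and tolerance are illustrative assumptions:

```python
import numpy as np

def window_delay(x, y, max_lag):
    """Lag (in samples) at which the cross-correlation of two windows peaks."""
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(np.roll(y, -l), x) for l in lags]
    return lags[int(np.argmax(xc))]

def time_delay_stability(x, y, win=100, max_lag=10, tol=1):
    """Fraction of adjacent windows whose estimated delay changes by <= tol:
    a crude proxy for the time-delay-stability coupling measure."""
    n = len(x) // win
    delays = [window_delay(x[i*win:(i+1)*win] - x[i*win:(i+1)*win].mean(),
                           y[i*win:(i+1)*win] - y[i*win:(i+1)*win].mean(),
                           max_lag) for i in range(n)]
    diffs = np.abs(np.diff(delays))
    return float(np.mean(diffs <= tol))

t = np.arange(1000)
x = np.sin(2 * np.pi * t / 50)
y = np.roll(x, 3)                     # y lags x by a constant 3 samples
tds = time_delay_stability(x, y)      # stable delay -> score near 1
```

Stably coupled signal pairs score near 1; pairs whose lag drifts from window to window score near 0.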
DOT National Transportation Integrated Search
2016-01-01
Human attention is a finite resource. When interrupted while performing a task, this : resource is split between two interactive tasks. People have to decide whether the benefits : from the interruptive interaction will be enough to offset the loss o...
Human-computer dialogue: Interaction tasks and techniques. Survey and categorization
NASA Technical Reports Server (NTRS)
Foley, J. D.
1983-01-01
Interaction techniques are described. Six basic interaction tasks, the requirements for each task, requirements related to interaction techniques, and each technique's hardware prerequisites affecting device selection are discussed.
Kostrubiec, Viviane; Dumas, Guillaume; Zanone, Pier-Giorgio; Kelso, J A Scott
2015-01-01
The Virtual Teacher paradigm, a version of the Human Dynamic Clamp (HDC), is introduced into studies of learning patterns of inter-personal coordination. Combining mathematical modeling and experimentation, we investigate how the HDC may be used as a Virtual Teacher (VT) to help humans co-produce and internalize new inter-personal coordination pattern(s). Human learners produced rhythmic finger movements whilst observing a computer-driven avatar, animated by dynamic equations stemming from the well-established Haken-Kelso-Bunz (1985) and Schöner-Kelso (1988) models of coordination. We demonstrate that the VT is successful in shifting the pattern co-produced by the VT-human system toward any value (Experiment 1) and that the VT can help humans learn unstable relative phasing patterns (Experiment 2). Using transfer entropy, we find that information flow from one partner to the other increases when VT-human coordination loses stability. This suggests that variable joint performance may actually facilitate interaction, and in the long run learning. VT appears to be a promising tool for exploring basic learning processes involved in social interaction, unraveling the dynamics of information flow between interacting partners, and providing possible rehabilitation opportunities.
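Transfer entropy, used above to quantify information flow between partners, can be estimated for discrete sequences with a simple plug-in (histogram) estimator. The sketch below is a generic estimator, not the authors' implementation, and the synthetic signals are illustrative:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(X->Y) in bits for discrete
    sequences: I(Y_{t+1}; X_t | Y_t), from empirical joint frequencies."""
    trip = list(zip(y[1:], y[:-1], x[:-1]))        # (y_next, y_past, x_past)
    n = len(trip)
    p_trip = Counter(trip)
    p_yy = Counter((a, b) for a, b, _ in trip)     # (y_next, y_past)
    p_yx = Counter((b, c) for _, b, c in trip)     # (y_past, x_past)
    p_y = Counter(b for _, b, _ in trip)           # (y_past,)
    te = 0.0
    for (a, b, c), m in p_trip.items():
        p_abc = m / n
        te += p_abc * np.log2((p_abc * (p_y[b] / n)) /
                              ((p_yy[(a, b)] / n) * (p_yx[(b, c)] / n)))
    return te

# y copies x with one step of delay, so information flows from x to y
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]
te_xy = transfer_entropy(x, y)     # near 1 bit: X fully determines next Y
te_yx = transfer_entropy(y, x)     # near 0 bits: no flow back to X
```

The asymmetry te_xy >> te_yx is what makes transfer entropy a directed coupling measure, unlike plain correlation.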
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Karniel, Amir; Avraham, Guy; Peles, Bat-Chen; Levy-Tzedek, Shelly; Nisky, Ilana
2010-01-01
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced-choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it the "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG: (i) by calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) by comparing two weighted sums of human and model handshakes, fitting a psychometric curve and extracting the point of subjective equality (PSE); (iii) by comparing a given model with a weighted sum of human and random signal, fitting a psychometric curve to the answers of the interrogator and extracting the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
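Methods (ii) and (iii) above hinge on fitting a psychometric curve to forced-choice proportions and reading off the PSE. A minimal sketch using a logistic curve and a brute-force least-squares fit (the data points and the grid are invented for illustration, not the study's measurements):

```python
import numpy as np

def psychometric(w, pse, slope):
    """Logistic psychometric function: P('more human-like') vs. human weight w."""
    return 1.0 / (1.0 + np.exp(-slope * (w - pse)))

# hypothetical 2AFC data: proportion of 'human-like' answers per human weight
weights = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
p_human = np.array([0.05, 0.15, 0.40, 0.65, 0.90, 0.97])

# brute-force least-squares fit over a (pse, slope) grid
pses = np.linspace(0, 1, 101)
slopes = np.linspace(1, 20, 96)
err = [[np.sum((psychometric(weights, p, s) - p_human) ** 2) for s in slopes]
       for p in pses]
i, j = np.unravel_index(np.argmin(err), (len(pses), len(slopes)))
pse, slope = pses[i], slopes[j]
# pse: the human weight at which the stimulus is judged human-like 50% of the time
```

In practice a maximum-likelihood fit is preferred over grid search, but the PSE read-out is the same idea.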
NASA Astrophysics Data System (ADS)
Sherwin, Jason
At the start of the 21st century, the topic of complexity remains a formidable challenge in engineering, science and other aspects of our world. It seems that when disaster strikes it is because some complex and unforeseen interaction causes the unfortunate outcome. Why did the financial system of the world melt down in 2008-2009? Why are global temperatures on the rise? These questions and other ones like them are difficult to answer because they pertain to contexts that require lengthy descriptions. In other words, these contexts are complex. But we as human beings are able to observe and recognize this thing we call 'complexity'. Furthermore, we recognize that there are certain elements of a context that form a system of complex interactions, i.e., a complex system. Many researchers have even noted similarities between seemingly disparate complex systems. Do sub-atomic systems bear resemblance to weather patterns? Or do human-based economic systems bear resemblance to macroscopic flows? Where do we draw the line in their resemblance? These are the kinds of questions that are asked in complex systems research. And the ability to recognize complexity is not limited to analytic research. Rather, there are many known examples of humans who not only observe and recognize but also operate complex systems. How do they do it? Is there something superhuman about these people, or is there something common to human anatomy that makes it possible to fly a plane? Or to drive a bus? Or to operate a nuclear power plant? Or to play Chopin's etudes on the piano? In each of these examples, a human being operates a complex system of machinery, whether it is a plane, a bus, a nuclear power plant or a piano. What is the common thread running through these abilities? The study of situational awareness (SA) examines how people perform these types of remarkable feats.
It is not a bottom-up science though because it relies on finding general principles running through a host of varied human activities. Nevertheless, since it is not constrained by computational details, the study of situational awareness provides a unique opportunity to approach complex tasks of operation from an analytical perspective. In other words, with SA, we get to see how humans observe, recognize and react to complex systems on which they exert some control. Reconciling this perspective on complexity with complex systems research, it might be possible to further our understanding of complex phenomena if we can probe the anatomical mechanisms by which we, as humans, do it naturally. At this unique intersection of two disciplines, a hybrid approach is needed. So in this work, we propose just such an approach. In particular, this research proposes a computational approach to the situational awareness (SA) of complex systems. Here we propose to implement certain aspects of situational awareness via a biologically-inspired machine-learning technique called Hierarchical Temporal Memory (HTM). In doing so, we will use either simulated or actual data to create and to test computational implementations of situational awareness. This will be tested in two example contexts, one being more complex than the other. The ultimate goal of this research is to demonstrate a possible approach to analyzing and understanding complex systems. By using HTM and carefully developing techniques to analyze the SA formed from data, it is believed that this goal can be obtained.
The Next Wave: Humans, Computers, and Redefining Reality
NASA Technical Reports Server (NTRS)
Little, William
2018-01-01
The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface"; it is "a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.
NASA Technical Reports Server (NTRS)
Kazerooni, H.
1991-01-01
A human's ability to perform physical tasks is limited not only by his intelligence but by his physical strength. If, in an appropriate environment, a machine's mechanical power is closely integrated with a human arm's mechanical power under the control of the human intellect, the resulting system will be superior to a loosely integrated combination of a human and a fully automated robot. Therefore, we must develop a fundamental solution to the problem of 'extending' human mechanical power. The work presented here defines 'extenders' as a class of robot manipulators worn by humans to increase human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The human, in physical contact with the extender, exchanges power and information signals with the extender. The aim is to determine the fundamental building blocks of an intelligent controller, a controller which allows interaction between humans and a broad class of computer-controlled machines via simultaneous exchange of both power and information signals. The prevalent trend in automation has been to physically separate the human from the machine, so the human must always send information signals via an intermediary device (e.g., joystick, pushbutton, light switch). Extenders, however, are perfect examples of self-powered machines that are built and controlled for the optimal exchange of power and information signals with humans. The human wearing the extender is in physical contact with the machine, so power transfer is unavoidable, and information signals from the human help to control the machine. Commands are transferred to the extender via the contact forces and the EMG signals between the wearer and the extender. The extender augments human motor ability without accepting any explicit commands: it accepts the EMG signals and the contact force between the person's arm and the extender, and 'translates' them into a desired position.
In this unique configuration, mechanical power transfer between the human and the extender occurs because the human is pushing against the extender. The extender transfers to the human's hand, in feedback fashion, a scaled-down version of the actual external load which the extender is manipulating. This natural feedback force on the human's hand allows him to 'feel' a modified version of the external forces on the extender. The information signals from the human (e.g., EMG signals) to the computer reflect human cognitive ability, and the power transfer between the human and the machine (e.g., physical interaction) reflects human physical ability. Thus the information transfer to the machine augments cognitive ability, and the power transfer augments motor ability. These two actions are coupled through the human cognitive/motor dynamic behavior. The goal is to derive the control rules for a class of computer-controlled machines that augment human physical and cognitive abilities in certain manipulative tasks.
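The force-reflection scheme described above can be caricatured in a few lines: the wearer feels a scaled-down copy of the external load, and the net contact force drives the extender's motion. This is an illustrative admittance-style sketch with made-up gains, not Kazerooni's actual controller:

```python
def extender_step(x, f_human, f_load, dt=0.01, gain=0.5, scale=0.1):
    """One admittance-control step of a hypothetical extender (illustrative
    numbers): the human's contact force drives motion, while a scaled-down
    copy of the external load is reflected back to the wearer's hand."""
    f_felt = scale * f_load            # wearer 'feels' 10% of the real load
    v = gain * (f_human - f_felt)      # net command: human intent minus feedback
    return x + v * dt, f_felt

# steady 20 N push against an 80 N load for 1 s of simulated time
x, f_felt = 0.0, 0.0
for _ in range(100):
    x, f_felt = extender_step(x, f_human=20.0, f_load=80.0)
```

The wearer manipulates an 80 N load while feeling only 8 N, which is the strength-amplification property the abstract describes.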
Assessment of physical activity of the human body considering the thermodynamic system.
Hochstein, Stefan; Rauschenberger, Philipp; Weigand, Bernhard; Siebert, Tobias; Schmitt, Syn; Schlicht, Wolfgang; Převorovská, Světlana; Maršík, František
2016-01-01
Correctly dosed physical activity is the basis of a vital and healthy life, but the measurement of physical activity remains rather empirical, resulting in limited individual and custom activity recommendations. Very accurate three-dimensional models of the cardiovascular system do exist; however, they require the numerical solution of the Navier-Stokes equations for the flow in blood vessels. These models are suitable for the research of cardiac diseases, but computationally very expensive. Direct measurements are expensive and often not applicable outside laboratories. This paper offers a new approach to assess physical activity using a thermodynamic system description and its leading quantity, entropy production, which offers a compromise between computation time and precise prediction of pressure, volume, and flow variables in blood vessels. Based on a simplified (one-dimensional) model of the cardiovascular system of the human body, we develop and evaluate a setup calculating entropy production of the heart to determine the intensity of human physical activity in a more precise way than previous parameters, e.g. frequently used energy considerations. The knowledge resulting from precise real-time physical activity assessment provides the basis for an intelligent human-technology interaction, allowing the degree of physical activity to be steadily adjusted according to the actual individual performance level and thus improving training and activity recommendations.
Wang, QuanQiu; Li, Li; Xu, Rong
2018-04-18
Colorectal cancer (CRC) is the second leading cause of cancer-related deaths. It is estimated that about half the cases of CRC occurring today are preventable. Recent studies showed that human gut microbiota and their collective metabolic outputs play important roles in CRC. However, the mechanisms by which human gut microbial metabolites interact with host genetics in contributing to CRC remain largely unknown. We hypothesize that computational approaches that integrate and analyze vast amounts of publicly available biomedical data have great potential in better understanding how human gut microbial metabolites are mechanistically involved in CRC. Leveraging this vast amount of publicly available data, we developed a computational algorithm to predict human gut microbial metabolites for CRC. We validated the prediction algorithm by showing that previously known CRC-associated gut microbial metabolites ranked highly (mean ranking: top 10.52%; median ranking: 6.29%; p-value: 3.85E-16). Moreover, we identified new gut microbial metabolites likely associated with CRC. Through computational analysis, we propose potential roles for tartaric acid, the top-ranked metabolite, in CRC etiology. In summary, our data-driven computational study generated a large number of associations that could serve as a starting point for further experiments to refute or validate these microbial metabolite associations in CRC.
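The validation reported above, checking where previously known hits land in a predicted ranking, is easy to reproduce generically. A sketch with invented scores, not the study's data:

```python
import numpy as np

def mean_percentile_rank(scores, known_positives):
    """Validation used in ranking studies: where do previously known hits
    fall in the predicted ranking? Lower percentile = better recovery."""
    order = np.argsort(-np.asarray(scores))          # best score first
    rank_of = {item: r for r, item in enumerate(order, start=1)}
    pct = [100.0 * rank_of[i] / len(scores) for i in known_positives]
    return float(np.mean(pct)), float(np.median(pct))

# hypothetical scores for 10 candidate metabolites; items 0 and 3 are known hits
scores = [0.9, 0.1, 0.2, 0.8, 0.05, 0.3, 0.15, 0.4, 0.25, 0.35]
mean_pct, median_pct = mean_percentile_rank(scores, known_positives=[0, 3])
```

Known positives landing in the top few percent (as in the abstract's 10.52% mean) is evidence the scoring function recovers established biology.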
Air Defense: A Computer Game for Research in Human Performance.
1981-07-01
warfare (ANW) threat analysis. Major elements of the threat analysis problem were embedded in an interactive air defense game controlled by a... The game requires sustained attention to a complex and interactive "hostile" environment, provides proper experimental control of relevant variables...
Recent Developments in Interactive and Communicative CALL: Hypermedia and "Intelligent" Systems.
ERIC Educational Resources Information Center
Coughlin, Josette M.
Two recent developments in computer-assisted language learning (CALL), interactive video systems and "intelligent" games, are discussed. Under the first heading, systems combining the use of a computer and video disc player are described, and Compact Discs Interactive (CDI) and Digital Video Interactive (DVI) are reviewed. The…
Action and language integration: from humans to cognitive robots.
Borghi, Anna M; Cangelosi, Angelo
2014-07-01
The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means that can help researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also by natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction.
[An interactive three-dimensional model of the human body].
Liem, S L
2009-01-01
Driven by advanced computer technology, it is now possible to show the human anatomy on a computer. On the internet, the Visible Body programme makes it possible to navigate in all directions through the anatomical structures of the human body, using a mouse and keyboard. Visible Body is a wonderful tool to give insight into the structures, body functions and organs of the human body.
Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M
2017-06-01
The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact with the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footstep and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.
NASA Astrophysics Data System (ADS)
Croft, William
2016-03-01
Arbib's computational comparative neuroprimatology [1] is a welcome model for cognitive linguists, that is, linguists who ground their models of language in human cognition and language use in social interaction. Arbib argues that language emerged via biological and cultural coevolution [1]; linguistic knowledge is represented by constructions, and semantic representations of linguistic constructions are grounded in embodied perceptual-motor schemas (the mirror system hypothesis). My comments offer some refinements from a linguistic point of view.
Research on wheelchair robot control system based on EOG
NASA Astrophysics Data System (ADS)
Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo
2018-04-01
The paper describes an intelligent wheelchair control system based on EOG. It can help disabled people improve their living ability. The system acquires the EOG signal from the user, detects the number of blinks and the direction of gaze, and then sends commands to the wheelchair robot via RS-232 to achieve control of the wheelchair robot. The control system combines EOG signal processing with human-computer interaction technology, achieving the goal of using conscious eye movements to control the wheelchair robot.
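Blink counting from an EOG trace, as used above, can be reduced to threshold-crossing detection. A toy sketch (the threshold, synthetic waveform, and detector are illustrative stand-ins for the paper's signal processing):

```python
import numpy as np

def count_blinks(eog, threshold):
    """Count blinks as upward crossings of an amplitude threshold,
    a minimal stand-in for an EOG blink detector."""
    above = eog > threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:])   # rising edges only
    return len(onsets)

# synthetic EOG trace: flat baseline with three blink-like positive pulses
t = np.linspace(0, 3, 3000)
eog = 20 * np.sin(2 * np.pi * t).clip(min=0) ** 3      # three positive lobes
blinks = count_blinks(eog, threshold=10.0)
```

A real detector would add band-pass filtering and a refractory period so that noise near the threshold is not double-counted.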
ERIC Educational Resources Information Center
Howard, Cynthia; Jordan, Pamela; Di Eugenio, Barbara; Katz, Sandra
2017-01-01
Despite a growing need for educational tools that support students at the earliest phases of undergraduate Computer Science (CS) curricula, relatively few such tools exist--the majority being Intelligent Tutoring Systems. Since peer interactions more readily give rise to challenges and negotiations, another way in which students can become more…
Man-machine interfaces in LACIE/ERIPS
NASA Technical Reports Server (NTRS)
Duprey, B. B. (Principal Investigator)
1979-01-01
One of the most important aspects of the interactive portion of the LACIE/ERIPS software system is the way in which the analysis and decision-making capabilities of a human being are integrated with the speed and accuracy of a computer to produce a powerful analysis system. The three major man-machine interfaces in the system are (1) the use of menus for communication between the software and the interactive user; (2) the checkpoint/restart facility to recreate in one job the internal environment achieved in an earlier one; and (3) the error recovery capability for handling errors that would normally cause job termination. This interactive system, which executes on an IBM 360/75 mainframe, was adapted for use in noninteractive (batch) mode. A case study is presented to show how the interfaces work in practice by defining some fields based on an image screen display, noting the field definitions, and obtaining a film product of the classification map.
Gesture controlled human-computer interface for the disabled.
Szczepaniak, Oskar M; Sawicki, Dariusz J
2017-02-28
The possibility of using a computer by a disabled person is one of the difficult problems of human-computer interaction (HCI), while professional activity (employment) is one of the most important factors affecting the quality of life, especially for disabled people. The aim of the project has been to propose a new HCI system that would allow for resuming employment for people who have lost the possibility of standard computer operation. The basic requirement was to replace all functions of a standard mouse without the need of performing precise hand movements and using fingers. Microsoft's Kinect motion controller was selected as the device to recognize hand movements. Several tests were made in order to create an optimal working environment with the new device. A new communication system consisting of the Kinect device and the proper software was built. The proposed system was tested by means of the standard subjective evaluations and objective metrics according to the standard ISO 9241-411:2012. The overall rating of the new HCI system shows the acceptance of the solution. The objective tests show that although the new system is a bit slower, it may effectively replace the computer mouse. The new HCI system fulfilled its task for a specific disabled person. This resulted in the ability to return to work. Additionally, the project confirmed the possibility of effective but nonstandard use of the Kinect device. Med Pr 2017;68(1):1-21.
Social Semantics for an Effective Enterprise
NASA Technical Reports Server (NTRS)
Berndt, Sarah; Doane, Mike
2012-01-01
An evolution of the Semantic Web, the Social Semantic Web (s2w) facilitates knowledge sharing with "useful information based on human contributions, which gets better as more people participate." The s2w reaches beyond the search box to move us from a collection of hyperlinked facts to meaningful, real-time context. When focused through the lens of Enterprise Search, the Social Semantic Web facilitates the fluid transition of meaningful business information from the source to the user. It is the confluence of human thought and computer processing, structured with the iterative application of taxonomies, folksonomies, ontologies, and metadata schemas. The importance and nuances of human interaction are often deemphasized when focusing on automatic generation of semantic markup, which results in dissatisfied users and unrealized return on investment. Users consistently qualify the value of information sets through the act of selection, making them the de facto stakeholders of the Social Semantic Web. Employers are the ultimate beneficiaries of s2w utilization with a better informed, more decisive workforce; one achieved not with an IT miracle technology, but by improved human-computer interactions. Johnson Space Center Taxonomist Sarah Berndt and Mike Doane, principal owner of Term Management, LLC, discuss the planning, development, and maintenance stages for components of a semantic system while emphasizing the necessity of a Social Semantic Web for the Enterprise. Risks and variables associated with the implementation of a semantic system are also identified and modeled.
2010-03-01
functionality and plausibility distinguishes this research from most research in computational linguistics and computational psycholinguistics. The... Psycholinguistic Theory: There is extensive psycholinguistic evidence that human language processing is essentially incremental and interactive... challenges of psycholinguistic research is to explain how humans can process language effortlessly and accurately given the complexity and ambiguity that is
Assessing the Purpose and Importance University Students Attribute to Current ICT Applications
ERIC Educational Resources Information Center
DiGiuseppe, Maurice; Partosoedarso, Elita
2014-01-01
In this study we surveyed students in a mid-sized university in Ontario, Canada to explore various aspects associated with their use of computer-based applications. For the purpose of analysis, the computer applications under study were categorized according to the Human-Computer-Human Interaction (HCHI) model of Desjardins (2005) in which…
Moore, Jason H; Boczko, Erik M; Summar, Marshall L
2005-02-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two or more DNA sequence variations. We review here this approach and then discuss how it can be used to model biochemical and metabolic data in the context of genetic studies of human disease susceptibility.
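The Petri net substrate of the models described above can be illustrated with a minimal token-firing rule. This toy encoding of a single reaction is an illustrative assumption, not one of the paper's grammar-evolved networks:

```python
def fire(marking, pre, post):
    """Fire the first enabled transition of a Petri net, given per-transition
    pre/post token requirements; returns the new marking, or None if no
    transition is enabled (the net is dead at this marking)."""
    for t in range(len(pre)):
        if all(marking[p] >= pre[t][p] for p in range(len(marking))):
            return [marking[p] - pre[t][p] + post[t][p]
                    for p in range(len(marking))]
    return None

# toy biochemical step A + B -> C encoded as one transition over places [A, B, C]
pre  = [[1, 1, 0]]
post = [[0, 0, 1]]
m = [2, 1, 0]
m = fire(m, pre, post)     # consumes one A and one B, produces one C
```

Grammatical evolution, as used in the paper, searches over net structures like `pre`/`post` rather than hand-coding them.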
High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin
2007-01-01
The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
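Step (4) above, deriving gaze from pupil and corneal centroids, reduces to centroid arithmetic on segmented image masks. A toy sketch on a synthetic 9x9 'image' (segmentation and per-user calibration, which real trackers need, are omitted):

```python
import numpy as np

def centroid(mask):
    """Row/column centroid of a boolean image mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def gaze_vector(pupil_mask, glint_mask):
    """Pupil-minus-corneal-glint offset: the basic quantity such trackers
    map to gaze direction after calibration."""
    pr, pc = centroid(pupil_mask)
    gr, gc = centroid(glint_mask)
    return pr - gr, pc - gc

# toy masks: pupil blob centered at (4, 5), corneal glint at (4, 4)
img = np.zeros((9, 9), dtype=bool)
pupil = img.copy(); pupil[3:6, 4:7] = True     # centroid (4.0, 5.0)
glint = img.copy(); glint[4, 4] = True         # centroid (4.0, 4.0)
dy, dx = gaze_vector(pupil, glint)
```

Because the glint stays fixed relative to the cornea while the pupil moves with the eye, this offset changes with gaze but is largely insensitive to small head movements.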
NASA Technical Reports Server (NTRS)
Kirlik, Alex
1991-01-01
Advances in computer and control technology offer the opportunity for task-offload aiding in human-machine systems. A task-offload aid (e.g., an autopilot, an intelligent assistant) can be selectively engaged by the human operator to dynamically delegate tasks to an automated system. Successful design and performance prediction in such systems requires knowledge of the factors influencing the strategy the operator develops and uses for managing interaction with the task-offload aid. A model is presented that shows how such strategies can be predicted as a function of three task context properties (frequency and duration of secondary tasks and costs of delaying secondary tasks) and three aid design properties (aid engagement and disengagement times, and aid performance relative to human performance). Sensitivity analysis indicates how each of these contextual and design factors affects the optimal aid usage strategy and attainable system performance. The model is applied to understanding human-automation interaction in laboratory experiments on human supervisory control behavior. The laboratory task allowed subjects freedom to determine strategies for using an autopilot in a dynamic, multi-task environment. Modeling results suggested that many subjects may indeed have been acting appropriately by not using the autopilot in the way its designers intended. Although the autopilot's function was technically sound, this aid was not designed with due regard to the overall task context in which it was placed. These results demonstrate the need for additional research on how people may strategically manage their own resources, as well as those provided by automation, in an effort to keep workload and performance at acceptable levels.
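The core tradeoff the model formalizes, whether delegating a task repays the aid's engagement and disengagement overhead, can be sketched with illustrative cost terms (a simplification, not the paper's exact formulation):

```python
# Delegation pays off only when the offloaded task lasts long enough to
# amortize the aid's engagement/disengagement overhead.

def should_engage(task_duration, engage_time, disengage_time,
                  aid_speed_ratio):
    """aid_speed_ratio: aid time per unit of human time (<1 = faster aid)."""
    cost_with_aid = (engage_time + disengage_time
                     + task_duration * aid_speed_ratio)
    return cost_with_aid < task_duration

# A long task amortizes the overhead; a short one does not.
long_ok = should_engage(60.0, 5.0, 5.0, 0.5)   # 40 time units vs 60
short_ok = should_engage(10.0, 5.0, 5.0, 0.5)  # 15 time units vs 10
```

The same structure explains the paper's finding: with slow engagement relative to typical task durations, ignoring the aid can be the operator's optimal strategy.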
Dynamic Task Performance, Cohesion, and Communications in Human Groups.
Giraldo, Luis Felipe; Passino, Kevin M
2016-10-01
In the study of the behavior of human groups, it has been observed that there is a strong interaction between the cohesiveness of the group, its performance when the group has to solve a task, and the patterns of communication between the members of the group. Developing mathematical and computational tools for the analysis and design of task-solving groups that are not only cohesive but also perform well is of importance in social sciences, organizational management, and engineering. In this paper, we model a human group as a dynamical system whose behavior is driven by a task optimization process and the interaction between subsystems that represent the members of the group interconnected according to a given communication network. These interactions are described as attractions and repulsions among members. We show that the dynamics characterized by the proposed mathematical model are qualitatively consistent with those observed in real human groups, where the key aspect is that the attraction patterns in the group and the commitment to solve the task are not static but change over time. Through a theoretical analysis of the system, we provide conditions on the parameters that allow the group to have cohesive behaviors, and Monte Carlo simulations are used to study group dynamics for different sets of parameters, communication topologies, and tasks to solve.
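A minimal attract/repel update in the spirit of the model can be sketched as below: each member is pulled toward the others at long range and pushed apart at short range, so an initially dispersed group contracts to a cohesive spacing. The gains and one-dimensional positions are illustrative.

```python
# Attraction/repulsion dynamics: attraction is linear in distance,
# repulsion dominates only at short range, yielding stable cohesion.

def step(positions, ka=0.1, kr=0.05, dt=1.0):
    new = []
    for i, xi in enumerate(positions):
        force = 0.0
        for j, xj in enumerate(positions):
            if i == j:
                continue
            d = xi - xj
            # -ka*d pulls toward xj; +kr*d/|d|^2 pushes away at close range
            force += -ka * d + kr * d / (d * d + 1e-6)
        new.append(xi + dt * force)
    return new

pos = [0.0, 4.0, 10.0]          # initially dispersed group
for _ in range(50):
    pos = step(pos)
spread = max(pos) - min(pos)    # contracts toward a small, nonzero spacing
```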
Computer vision-based classification of hand grip variations in neurorehabilitation.
Zariffa, José; Steeves, John D
2011-01-01
The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects.
Designers' models of the human-computer interface
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Breedin, Sarah D.
1993-01-01
Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components, such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
Network Physiology: How Organ Systems Dynamically Interact
Bartsch, Ronny P.; Liu, Kang K. L.; Bashan, Amir; Ivanov, Plamen Ch.
2015-01-01
We systematically study how diverse physiologic systems in the human organism dynamically interact and collectively behave to produce distinct physiologic states and functions. This is a fundamental question in the new interdisciplinary field of Network Physiology, and has not been previously explored. Introducing the novel concept of Time Delay Stability (TDS), we develop a computational approach to identify and quantify networks of physiologic interactions from long-term continuous, multi-channel physiological recordings. We also develop a physiologically-motivated visualization framework to map networks of dynamical organ interactions to graphical objects encoded with information about the coupling strength of network links quantified using the TDS measure. Applying a system-wide integrative approach, we identify distinct patterns in the network structure of organ interactions, as well as the frequency bands through which these interactions are mediated. We establish the first maps representing physiologic organ network interactions and discover basic rules underlying the complex hierarchical reorganization in physiologic networks with transitions across physiologic states. Our findings demonstrate a direct association between network topology and physiologic function, and provide new insights into understanding how health and distinct physiologic states emerge from networked interactions among nonlinear multi-component complex systems. The investigations presented here are initial steps in building a first atlas of dynamic interactions among organ systems.
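The TDS idea can be sketched as follows: in successive windows, estimate the lag that maximizes the cross-correlation between two signals, then score the link by the fraction of windows whose lag stays near-constant. The window length, lag range, tolerance, and synthetic signals are illustrative.

```python
import math

def best_lag(x, y, max_lag):
    """Lag in [-max_lag, max_lag] maximizing the cross-correlation sum."""
    def score(lag):
        s = 0.0
        for i in range(len(x)):
            j = i + lag
            if 0 <= j < len(y):
                s += x[i] * y[j]
        return s
    return max(range(-max_lag, max_lag + 1), key=score)

def tds(x, y, win, max_lag, tol=1):
    """Fraction of windows whose best lag stays within tol of the median."""
    lags = [best_lag(x[s:s + win], y[s:s + win], max_lag)
            for s in range(0, len(x) - win + 1, win)]
    ref = sorted(lags)[len(lags) // 2]
    return sum(abs(l - ref) <= tol for l in lags) / len(lags)

# Synthetic pair: y is x advanced by 3 samples, so the lag is stable.
x = [math.sin(0.3 * t) for t in range(200)]
y = x[3:] + [0.0, 0.0, 0.0]
stability = tds(x, y, win=50, max_lag=5)
```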
SIG -- The Role of Human-Computer Interaction in Next-Generation Control Rooms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Jacques Hugo; Christian Richard
2005-04-01
The purpose of this CHI Special Interest Group (SIG) is to facilitate the convergence between human-computer interaction (HCI) and control room design. HCI researchers and practitioners actively need to infuse state-of-the-art interface technology into control rooms to meet usability, safety, and regulatory requirements. This SIG outlines potential HCI contributions to instrumentation and control (I&C) and automation in control rooms as well as to general control room design.
An Empirical Study of User Experience on Touch Mice
ERIC Educational Resources Information Center
Chou, Jyh Rong
2016-01-01
The touch mouse is a new type of computer mouse that provides users with a new way of touch-based environment to interact with computers. For more than a decade, user experience (UX) has grown into a core concept of human-computer interaction (HCI), describing a user's perceptions and responses that result from the use of a product in a particular…
A roadmap to computational social neuroscience.
Tognoli, Emmanuelle; Dumas, Guillaume; Kelso, J A Scott
2018-02-01
To complement experimental efforts toward understanding human social interactions at both neural and behavioral levels, two computational approaches are presented: (1) a fully parameterizable mathematical model of a social partner, the Human Dynamic Clamp which, by virtue of experimentally controlled interactions between Virtual Partners and real people, allows for emergent behaviors to be studied; and (2) a multiscale neurocomputational model of social coordination that enables exploration of social self-organization at all levels-from neuronal patterns to people interacting with each other. These complementary frameworks and the cross product of their analysis aim at understanding the fundamental principles governing social behavior.
Real time eye tracking using Kalman extended spatio-temporal context learning
NASA Astrophysics Data System (ADS)
Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu
2017-06-01
Real time eye tracking has numerous applications in human computer interaction such as a mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state of the art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
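A minimal constant-velocity Kalman filter of the kind used to bridge occlusions can be sketched as below: the state is predicted from the motion model each frame, and the measurement update is simply skipped while the eye is occluded (e.g., during a blink). The 1-D state and noise parameters are illustrative, not the paper's spatio-temporal context formulation.

```python
# 1-D constant-velocity Kalman filter; `None` marks occluded frames.

def kalman_track(measurements, q=0.01, r=1.0, dt=1.0):
    x, v = measurements[0], 0.0        # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    out = []
    for z in measurements[1:]:
        # Predict: x' = x + v*dt, P' = F P F^T + Q
        x, v = x + dt * v, v
        p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1], p[1][1] + q]]
        if z is not None:              # update only when the eye is visible
            k0 = p[0][0] / (p[0][0] + r)
            k1 = p[1][0] / (p[0][0] + r)
            y = z - x                  # innovation
            x, v = x + k0 * y, v + k1 * y
            p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                 [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out

# Eye moves steadily, then a blink hides frames 3-4; the filter coasts on
# the estimated velocity instead of losing the track.
track = kalman_track([0.0, 1.0, 2.0, None, None, 5.0])
```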
Long Chen; Zhongpeng Wang; Feng He; Jiajia Yang; Hongzhi Qi; Peng Zhou; Baikun Wan; Dong Ming
2015-08-01
The hybrid brain-computer interface (hBCI) can provide a higher information transfer rate than classical BCIs. It includes more than one brain-computer or human-machine interaction paradigm, such as the combination of the P300 and SSVEP paradigms. We first constructed independent subsystems for three different paradigms and tested each of them in online experiments. We then constructed a serial hybrid BCI system that combined these paradigms to achieve the functions of typing letters, moving and clicking a cursor, and switching among them for the purpose of browsing webpages. Five subjects were involved in this study. They all successfully realized these functions in the online tests. The subjects achieved an accuracy above 90% after training, which met the requirement for operating the system efficiently. The results demonstrated that it was an efficient and robust system, which provides an approach for clinical application.
A psychotechnological review on eye-tracking systems: towards user experience.
Mele, Maria Laura; Federici, Stefano
2012-07-01
The aim of the present work is to show a critical review of the international literature on eye-tracking technologies by focusing on those features that characterize them as 'psychotechnologies'. A critical literature review was conducted through the main psychology, engineering, and computer sciences databases by following specific inclusion and exclusion criteria. A total of 46 matches from 1998 to 2010 were selected for content analysis. Results have been divided into four broad thematic areas. We found that, although there is a growing attention to end-users, most of the studies reviewed in this work are far from being considered as adopting holistic human-computer interaction models that include both individual differences and needs of users. The user is often considered only as an object for measuring the functioning of the technological system, not as a real alter ego in the intrasystemic interaction. In order to fully benefit from the communicative functions of gaze, the research on eye-tracking must emphasize user experience. Eye-tracking systems will become an effective assistive technology for integration, adaptation and neutralization of the environmental barrier only when a holistic model can be applied to both the design processes and the assessment of the functional components of the interaction.
Central mechanisms for force and motion--towards computational synthesis of human movement.
Hemami, Hooshang; Dariush, Behzad
2012-12-01
Anatomical, physiological and experimental research on the human body can be supplemented by computational synthesis of the human body for all movement: routine daily activities, sports, dancing, and artistic and exploratory involvements. The synthesis requires thorough knowledge about all subsystems of the human body and their interactions, and allows for integration of established knowledge in working modules. It also affords confirmation and/or verification of scientific hypotheses about the workings of the central nervous system (CNS). A simple step in this direction is explored here for controlling the forces of constraint. It requires co-activation of agonist-antagonist musculature. The desired trajectories of motion and the force of contact have to be provided by the CNS. The spinal control involves projection onto a muscular subset that induces the force of contact. The projection of force in the sensory motor cortex is implemented via a well-defined neural population unit, and is executed in the spinal cord by a standard integral controller requiring input from tendon organs. The sensory motor cortex structure is extended to the case of directing motion via two neural population units with vision input and spindle efferents. Digital computer simulations show the feasibility of the system. The formulation is modular and can be extended to multi-link limbs, and to robot and humanoid systems with many pairs of actuators or muscles. It can be expanded to include reticular activating structures and learning.
Natural interaction for unmanned systems
NASA Astrophysics Data System (ADS)
Taylor, Glenn; Purman, Ben; Schermerhorn, Paul; Garcia-Sampedro, Guillermo; Lanting, Matt; Quist, Michael; Kawatsu, Chris
2015-05-01
Military unmanned systems today are typically controlled by two methods: tele-operation or menu-based, search-and-click interfaces. Both approaches require the operator's constant vigilance: tele-operation requires constant input to drive the vehicle inch by inch; a menu-based interface requires eyes on the screen in order to search through alternatives and select the right menu item. In both cases, operators spend most of their time and attention driving and minding the unmanned systems rather than acting as warfighters. With these approaches, the platform and interface become more of a burden than a benefit. The availability of inexpensive sensor systems in products such as Microsoft Kinect™ or Nintendo Wii™ has resulted in new ways of interacting with computing systems, but new sensors alone are not enough. Developing useful and usable human-system interfaces requires understanding users and interaction in context: not just what new sensors afford in terms of interaction, but how users want to interact with these systems, for what purpose, and how sensors might enable those interactions. Additionally, the system needs to reliably make sense of the user's inputs in context, translate that interpretation into commands for the unmanned system, and give feedback to the user. In this paper, we describe an example natural interface for unmanned systems, called the Smart Interaction Device (SID), which enables natural two-way interaction with unmanned systems including the use of speech, sketch, and gestures. We present a few example applications of SID to different types of unmanned systems and different kinds of interactions.
Working Memory Load Strengthens Reward Prediction Errors.
Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David
2017-04-19
Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors.
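The two-system account can be sketched as a delta-rule RL update blended with a one-shot WM store whose weight falls once the stimulus set size exceeds WM capacity. The mixing rule and parameter values are illustrative, not the paper's fitted model.

```python
# Incremental RL (delta rule) plus a capacity-limited WM contribution.

def rpe_update(q, reward, alpha=0.1):
    """Classic delta rule: move the value estimate by alpha * RPE."""
    return q + alpha * (reward - q)

def mixture_value(q_rl, q_wm, set_size, capacity=4):
    """Blend WM and RL values; WM weight shrinks as load exceeds capacity."""
    w = min(1.0, capacity / set_size)
    return w * q_wm + (1 - w) * q_rl

q = 0.0
for _ in range(10):
    q = rpe_update(q, 1.0)                  # slow incremental learning
v_low = mixture_value(q, 1.0, set_size=2)   # WM dominates at low load
v_high = mixture_value(q, 1.0, set_size=8)  # RL contributes more at high load
```

This also illustrates the fMRI finding: when WM covers the problem (low load), the effective RPE driving learning is small, because the blended prediction is already accurate.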
Digital item for digital human memory--television commerce application: family tree albuming system
NASA Astrophysics Data System (ADS)
Song, Jaeil; Lee, Hyejoo; Hong, JinWoo
2004-01-01
Technical advances in creating and storing digital media in daily life enable computers to capture human life and remember it as people do. A critical point in digitizing human life is how to recall bits of experience that are associated by semantic information. This paper proposes a technique for structuring dynamic digital objects based on MPEG-21 Digital Items (DIs) in order to recall human memory, and an interactive TV service built on a family tree albuming system as one of its applications. A DI is a dynamically reconfigurable, uniquely identified logical unit, described by a descriptor language, for structuring relationships among multiple media resources. Digital Item Processing (DIP) provides the means to interact with DIs to remind the user of context, with active properties through which objects carry executable behaviors. Each user can adapt DIs' active properties to tailor the behavior of DIs to match his/her own specific needs. DI technologies in Intellectual Property Management and Protection (IPMP) can be used for privacy protection. In the interaction between the social space and the technological space, the internal dynamics of family life fit well with sharing a family albuming service via the family television. The family albuming service can act as a virtual community builder for family members. As memory is shared between family members, multiple annotations (including active properties on contextual information) will be made with snowballing value.
A Human Factors Framework for Payload Display Design
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Hutchinson, Sonya L.
1998-01-01
During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.
Wargaming and interactive color graphics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bly, S.; Buzzell, C.; Smith, G.
1980-08-04
JANUS is a two-sided interactive color graphic simulation in which human commanders can direct their forces, each trying to accomplish their mission. This competitive synthetic battlefield is used to explore the range of human ingenuity under conditions of incomplete information about enemy strength and deployment. Each player can react to new situations by planning new unit movements, using conventional and nuclear weapons, or modifying unit objectives. Conventional direct fire among tanks, infantry fighting vehicles, helicopters, and other units is automated subject to constraints of target acquisition, reload rate, range, suppression, etc. Artillery and missile indirect fire systems deliver conventional munitions, smoke, and nuclear weapons. Players use reconnaissance units, helicopters, or fixed wing aircraft to search for enemy unit locations. Counter-battery radars acquire enemy artillery. The JANUS simulation at LLL has demonstrated the value of the computer as a sophisticated blackboard. A small dedicated minicomputer is adequate for detailed calculations, and may be preferable to sharing a more powerful machine. Real-time color interactive graphics are essential to allow realistic command decision inputs. Competitive human-versus-human synthetic experiences are intense and well-remembered. 2 figures.
Use of parallel computing for analyzing big data in EEG studies of ambiguous perception
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Grubov, Vadim V.; Kirsanov, Daniil V.
2018-02-01
The problem of interaction between humans and machine systems through neuro-interfaces (or brain-computer interfaces) is an urgent task that requires analysis of large amounts of neurophysiological EEG data. In the present paper we consider methods of parallel computing as one of the most powerful tools for processing experimental data in real time with respect to the multichannel structure of EEG. In this context we demonstrate the application of parallel computing to the estimation of the spectral properties of multichannel EEG signals associated with visual perception. Using the CUDA C library we run a wavelet-based algorithm on GPUs and show the possibility of detecting specific patterns in a multichannel set of EEG data in real time.
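The channel-parallel structure can be sketched without a GPU: each channel's spectral estimate is independent of the others, so channels map directly onto parallel workers. Here a thread pool stands in for the paper's CUDA kernels, and the synthetic signals and toy power estimate are illustrative.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def band_power(channel):
    # Toy spectral estimate: mean squared amplitude of the channel.
    return sum(v * v for v in channel) / len(channel)

# Four synthetic EEG channels at a 250 Hz sampling rate, one second each.
channels = [[math.sin(2 * math.pi * f * t / 250) for t in range(250)]
            for f in (8, 10, 12, 30)]

# Channels are independent, so they can be processed concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    powers = list(pool.map(band_power, channels))
```

A GPU implementation follows the same decomposition, assigning each channel (or each wavelet scale) to its own block of threads.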
Computational dynamic approaches for temporal omics data with applications to systems medicine.
Liang, Yulan; Kelemen, Arpad
2017-01-01
Modeling and predicting biological dynamic systems while simultaneously estimating the kinetic structural and functional parameters is extremely important in systems and computational biology. This is key for understanding the complexity of human health, drug response, disease susceptibility and pathogenesis for systems medicine. Temporal omics data used to measure dynamic biological systems are essential for discovering complex biological interactions and clinical mechanisms and causation. However, delineating the possible associations and causalities of genes, proteins, metabolites, cells and other biological entities from high-throughput time course omics data is challenging, and conventional experimental techniques are not suited to it in the big omics era. In this paper, we present various recently developed dynamic trajectory and causal network approaches for temporal omics data, which are extremely useful for researchers who want to start working in this challenging research area. Moreover, applications to various biological systems, health conditions and disease statuses are presented, along with examples that summarize state-of-the-art performance on different specific mining tasks. We critically discuss the merits, drawbacks and limitations of the approaches, and the associated main challenges for the years ahead. The most recent computing tools and software for analyzing specific problem types, associated platform resources, and other potentials for the dynamic trajectory and interaction methods are also presented and discussed in detail.
Nonlinear and Digital Man-machine Control Systems Modeling
NASA Technical Reports Server (NTRS)
Mekel, R.
1972-01-01
An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics that take the identification dynamics into consideration. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that the model and the human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
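A minimal Lyapunov-based adaptation loop in this spirit can be sketched as below: a first-order "operator" plant and an adjustable model receive the same stimulus, and the model parameter is driven by the tracking error using the update law obtained from a quadratic Liapunov function V = e²/2 + (â − a)²/(2γ), whose derivative is made negative semidefinite by choosing dâ/dt = γ·e·x_m. The plant, gains, and step sizes are illustrative.

```python
# Model-reference adaptation: drive the model parameter a_hat until the
# model response matches the (unknown) operator response.

def adapt(a_true=2.0, a_hat=0.5, gamma=5.0, dt=0.01, steps=5000):
    x = xm = 0.0
    for _ in range(steps):
        u = 1.0                        # common stimulus
        x += dt * (-a_true * x + u)    # human operator (unknown plant)
        xm += dt * (-a_hat * xm + u)   # adjustable model
        e = xm - x                     # model-reference error
        a_hat += dt * gamma * e * xm   # adaptation law from dV/dt <= 0
    return a_hat, abs(xm - x)

a_est, err = adapt()   # a_est converges toward the true parameter 2.0
```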
A real-time compliance mapping system using standard endoscopic surgical forceps.
Fakhry, Morkos; Bello, Fernando; Hanna, George B
2009-04-01
In endoscopic surgery, the use of long surgical instruments through access ports diminishes tactile feedback and degrades the surgeon's ability to identify hidden tissue abnormalities. To overcome this constraint, we developed a real-time compliance mapping system that is composed of: 1) a standard surgical instrument with a high-precision sensor configuration design; 2) real-time objective interpretation of the output signals for tissue identification; and 3) a novel human-computer interaction technique using interactive voice and handle force monitoring techniques to suit the operating theater working environment. The system was calibrated and used in clinical practice in four routine endoscopic human procedures. In a laboratory-based experiment to compare the tissue discriminatory power of the system with that of surgeons' hands, the system's tissue discriminatory power was three times more sensitive and 10% less specific. The data acquisition precision was tested using principal component analysis (R²X = 0.975, Q² (cum) = 0.808) and partial least squares discriminant analysis (R²X = 0.903, R²Y = 0.729, Q² (cum) = 0.572).
Nehaniv, Chrystopher L; Rhodes, John; Egri-Nagy, Attila; Dini, Paolo; Morris, Eric Rothstein; Horváth, Gábor; Karimi, Fariba; Schreckling, Daniel; Schilstra, Maria J
2015-07-28
Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53-mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to 'pools of reversibility'. These natural subsystems are related to one another in a hierarchical manner by the notion of 'weak control'. We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.
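The canonical construction can be sketched directly: each input letter of a finite automaton acts as a map on the state set, and closing these maps under composition yields the transformation semigroup, within which the injective elements form the permutation groups ('pools of reversibility'). The 3-state automaton below is a toy example, not one of the paper's biological models.

```python
# Transformation semigroup of a finite automaton: close the letter maps
# under composition, then pick out the permutations.

def compose(f, g):
    """Apply g first, then f (maps given as tuples over states 0..n-1)."""
    return tuple(f[g[s]] for s in range(len(g)))

def semigroup(generators):
    elems = set(generators)
    while True:
        new = {compose(f, g) for f in elems for g in elems} - elems
        if not new:
            return elems
        elems |= new

# Letters as state maps on {0, 1, 2}: 'a' is a 3-cycle (reversible),
# 'b' collapses state 2 into state 0 (irreversible).
a = (1, 2, 0)
b = (0, 1, 0)
S = semigroup({a, b})
# The injective elements: here the cyclic group {id, a, a^2}.
perms = {f for f in S if len(set(f)) == len(f)}
```

Even in this toy case the algebraic structure is visible: the cyclic permutation group sits inside a larger semigroup containing irreversible (rank-collapsing) maps, down to the constant map.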
An overview of computer-based natural language processing
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1983-01-01
Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines in natural language (like English, Japanese, or German, in contrast to formal computer languages). The doors that such an achievement can open have made this a major research area in Artificial Intelligence and Computational Linguistics. Commercial natural language interfaces to computers have recently entered the market, and the future looks bright for other applications as well. This report reviews the basic approaches to such systems, the techniques utilized, applications, the state of the art of the technology, issues and research requirements, the major participants, and finally, future trends and expectations. It is anticipated that this report will prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.
McKay, E
2000-01-01
An innovative research program was devised to investigate the interactive effect of instructional strategies enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style, on the acquisition of programming concepts. The Cognitive Styles Analysis (CSA) program (Riding, 1991) was used to establish the participants' cognitive style. The QUEST Interactive Test Analysis System (Adams and Khoo, 1996) provided the cognitive performance measuring tool, which ensured an absence of measurement error in the programming knowledge testing instruments. Therefore, reliability of the instrumentation was assured through the calibration techniques utilized by the QUEST estimate, providing predictability of the research design. A means analysis of the QUEST data, using the Cohen (1977) approach to effect size and statistical power, further quantified the significance of the findings. The experimental methodology adopted for this research links the disciplines of instructional science, cognitive psychology, and objective measurement to provide reliable mechanisms for beneficial use in the evaluation of cognitive performance by the education, training and development sectors. Furthermore, the research outcomes will be of interest to educators, cognitive psychologists, communications engineers, and computer scientists specializing in computer-human interactions.
Knowledge Engineering Aspects of Affective Bi-Modal Educational Applications
NASA Astrophysics Data System (ADS)
Alepis, Efthymios; Virvou, Maria; Kabassi, Katerina
This paper analyses the knowledge and software engineering aspects of educational applications that provide affective bi-modal human-computer interaction. For this purpose, a system that provides affective interaction based on evidence from two different modes has been developed. More specifically, the system's inferences about students' emotions are based on user input evidence from the keyboard and the microphone. Evidence from these two modes is combined by a user modelling component that incorporates user stereotypes as well as a multi criteria decision making theory. The mechanism that integrates the inferences from the two modes has been based on the results of two empirical studies that were conducted in the context of knowledge engineering of the system. The evaluation of the developed system showed significant improvements in the recognition of the emotional states of users.
Analysis of Feedback in after Action Reviews
1987-06-01
CONTENTS: INTRODUCTION; A Perspective on Feedback; Overview of Current Research ... part of their training program. The AAR is in marked contrast to the critique method of feedback which is often used in military training. The AAR ... feedback is task-inherent feedback. Task-inherent feedback refers to human-machine interacting systems, e.g., computers, where in a visual tracking task
Questioning Mechanisms During Tutoring, Conversation, and Human-Computer Interaction
1993-06-01
Learner Assessment Methods Using a Computer Based Interactive Videodisc System.
ERIC Educational Resources Information Center
Ehrlich, Lisa R.
This paper focuses on item design considerations faced by instructional designers and evaluators when using computer videodisc delivery systems as a means of assessing learner comprehension and competencies. Media characteristics of various interactive computer/videodisc training systems are briefly discussed as well as reasons for using such…
A neuron-astrocyte transistor-like model for neuromorphic dressed neurons.
Valenza, G; Pioggia, G; Armato, A; Ferro, M; Scilingo, E P; De Rossi, D
2011-09-01
Experimental evidence on the role of synaptic glia as an active partner, together with the synapse, in neuronal signaling and the dynamics of neural tissue strongly suggests investigating more realistic neuron-glia models for a better understanding of human brain processing. Among the glial cells, astrocytes play a crucial role in the tripartite synapse, i.e., the dressed neuron. A well-known two-way astrocyte-neuron interaction can be found in the literature, completely revising the purely supportive role once attributed to the glia. The aim of this study is to provide a computationally efficient model for neuron-glia interaction. The neuron-glia interactions were simulated by implementing the Li-Rinzel model for an astrocyte and the Izhikevich model for a neuron. Assuming the dressed-neuron dynamics to be similar to the nonlinear input-output characteristics of a bipolar junction transistor, we derived our computationally efficient model. This model may represent the fundamental computational unit for the development of real-time artificial neuron-glia networks, opening new perspectives in pattern recognition systems and in brain neurophysiology. Copyright © 2011 Elsevier Ltd. All rights reserved.
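The Izhikevich neuron model mentioned in this abstract is compact enough to sketch directly. Below is a minimal forward-Euler sketch in Python of a single "regular spiking" Izhikevich neuron using the standard published parameters a, b, c, d; the drive current I and step size are illustrative choices, and the astrocyte (Li-Rinzel) coupling of the study is omitted:

```python
import numpy as np

def izhikevich(I, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate a single Izhikevich neuron driven by constant input I.

    Returns the membrane potential trace and spike times (ms).
    a, b, c, d are the standard 'regular spiking' parameter values.
    """
    v, u = c, b * c
    vs, spikes = [], []
    for t in range(int(T / dt)):
        # Izhikevich (2003) dynamics: quadratic membrane equation
        # plus a slow recovery variable u.
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
        if v >= 30.0:          # spike: reset membrane, bump recovery
            spikes.append(t * dt)
            v, u = c, u + d
        vs.append(v)
    return np.array(vs), spikes

vs, spikes = izhikevich(I=10.0)
print(f"{len(spikes)} spikes in 1 s")
```

With I = 10 the neuron fires tonically; varying a, b, c, d reproduces the other firing classes catalogued by Izhikevich.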
NASA Technical Reports Server (NTRS)
DeMott, Diana
2013-01-01
Compared to equipment designed to perform the same function over and over, humans are simply not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues, including stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information, or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of situations that can lead to human errors (taking the wrong action, taking no action, or making bad decisions) provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.
ERIC Educational Resources Information Center
Lonchamp, Jacques
2010-01-01
Computer-based interaction analysis (IA) is an automatic process that aims at understanding a computer-mediated activity. In a CSCL system, computer-based IA can provide information directly to learners for self-assessment and regulation and to tutors for coaching support. This article proposes a customizable computer-based IA approach for a…
NASA Astrophysics Data System (ADS)
Brandic, Ivona; Music, Dejan; Dustdar, Schahram
Nowadays, novel computing paradigms such as Cloud Computing are gaining importance. In Cloud Computing, users pay for the usage of computing power provided as a service. Beforehand, they can negotiate specific functional and non-functional requirements relevant to application execution. However, providing computing power as a service raises several research challenges. On the one hand, dynamic, versatile, and adaptable services are required that can cope with system failures and environmental changes. On the other hand, human interaction with the system should be minimized. In this chapter we present the first results in establishing adaptable, versatile, and dynamic services, considering negotiation bootstrapping and service mediation, achieved in the context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project. We discuss novel meta-negotiation and SLA mapping solutions for Cloud services, bridging the gap between current QoS models and Cloud middleware and representing important prerequisites for the establishment of autonomic Cloud services.
Proactive learning for artificial cognitive systems
NASA Astrophysics Data System (ADS)
Lee, Soo-Young
2010-04-01
The Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between a robot and an unknown environment (people, other robots, cyberspace). Situation awareness in an unknown environment requires receiving audiovisual signals and accumulating knowledge. If its knowledge is not sufficient, the PL system should improve it by itself through the Internet and other sources. Human-oriented decision making also requires the robot to have self-identity and emotion. Finally, the developed models and system will be mounted on a robot for a human-robot co-existing society. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans with several levels of cognitive ability.
HU, TING; DARABOS, CHRISTIAN; CRICCO, MARIA E.; KONG, EMILY; MOORE, JASON H.
2014-01-01
The large volume of GWAS data poses great computational challenges for analyzing genetic interactions associated with common human diseases. We propose a computational framework for characterizing epistatic interactions among large sets of genetic attributes in GWAS data. We build the human phenotype network (HPN) and focus on a disease of interest. In this study, we use the GLAUGEN glaucoma GWAS dataset and apply the HPN as a biological knowledge-based filter to prioritize genetic variants. Then, we use the statistical epistasis network (SEN) to identify a significant connected network of pairwise epistatic interactions among the prioritized SNPs. These clearly highlight the complex genetic basis of glaucoma. Furthermore, we identify key SNPs by quantifying structural network characteristics. Through functional annotation of these key SNPs using Biofilter, a software tool accessing multiple publicly available human genetic data sources, we find supporting biomedical evidence linking glaucoma to an array of genetic diseases, providing a proof of concept. We conclude by suggesting hypotheses for a better understanding of the disease. PMID:25592582
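As an illustration of the statistical epistasis network idea, the following Python sketch builds a network from hypothetical pairwise epistasis scores (the SNP names, scores, and threshold are invented for illustration; the real study derives significance statistically from GWAS genotypes) and ranks "key SNPs" by one structural characteristic, node degree:

```python
# Hypothetical pairwise epistasis scores between SNPs (names and
# values are made up; the study computes these from genotype data).
scores = {
    ("rsA", "rsB"): 0.91, ("rsB", "rsC"): 0.74, ("rsA", "rsC"): 0.55,
    ("rsD", "rsE"): 0.80, ("rsC", "rsD"): 0.30, ("rsF", "rsG"): 0.12,
}
threshold = 0.5  # keep only 'significant' pairwise interactions

# Build the statistical epistasis network as an adjacency list.
adj = {}
for (u, v), s in scores.items():
    if s >= threshold:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

# Rank key SNPs by a structural network characteristic: node degree.
degree = {n: len(nb) for n, nb in adj.items()}
key_snps = sorted(degree, key=degree.get, reverse=True)
print(key_snps[:3])
```

Other structural measures (betweenness, clustering) could replace degree in the ranking step without changing the overall pipeline shape.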
Automatic prediction of facial trait judgments: appearance vs. structural models.
Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi
2011-01-01
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the naturalness of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g., dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
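One way to picture the structural approach: represent each face by the pairwise distances among its salient points and fit a linear predictor of a trait score. The Python sketch below uses synthetic landmarks and closed-form ridge regression; the feature set, trait scores, and regularization are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def structural_features(landmarks):
    """Pairwise Euclidean distances between facial salient points.

    landmarks: (n_points, 2) array of (x, y) positions. This is one
    simple instance of a 'structural' representation.
    """
    n = len(landmarks)
    d = [np.linalg.norm(landmarks[i] - landmarks[j])
         for i in range(n) for j in range(i + 1, n)]
    return np.array(d)

# Synthetic data: 200 'faces' with 5 landmarks each; the trait score
# is a hidden linear function of the distances plus noise.
X = np.array([structural_features(rng.normal(size=(5, 2)))
              for _ in range(200)])
w_true = rng.normal(size=X.shape[1])
y = X @ w_true + 0.1 * rng.normal(size=200)

# Ridge regression (closed form) as the trait-judgment predictor.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
r = np.corrcoef(X @ w, y)[0, 1]
print(f"fit correlation: {r:.3f}")
```

A holistic counterpart would instead regress directly on pixel or appearance descriptors, trading interpretability of the structural features for richer input.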
Robonaut: A Robotic Astronaut Assistant
NASA Technical Reports Server (NTRS)
Ambrose, Robert O.; Diftler, Myron A.
2001-01-01
NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three-degree-of-freedom (DOF) articulated waist and two seven-DOF arms, giving it an impressive workspace for interacting with its environment. Its two five-fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.
De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.
2012-01-01
Background: While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities.

Methods: In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces.

Results: We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box.

Conclusions: A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
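The RBFN condensation step described in this abstract can be illustrated in miniature. The Python sketch below fits Gaussian radial-basis coefficients to a toy precomputed "displacement, response" database offline, then reconstructs the response at new configurations, mirroring the offline/online split; the toy response function, center count, and kernel width are arbitrary assumptions, not PhyNNeSS internals:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(X, centers, gamma):
    # Gaussian radial-basis activations for each (sample, center) pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Offline step (stands in for the FEM database): sample prescribed
# displacements x and record a nonlinear 'force' response f(x).
X = rng.uniform(-1, 1, size=(500, 2))
f = np.sin(3 * X[:, 0]) * X[:, 1] ** 2          # toy nonlinear response

# Condense the database into RBF coefficients via least squares.
centers = rng.uniform(-1, 1, size=(40, 2))
H = rbf(X, centers, gamma=4.0)
w, *_ = np.linalg.lstsq(H, f, rcond=None)

# 'Real-time' step: reconstruct the response at new configurations.
Xq = rng.uniform(-1, 1, size=(100, 2))
pred = rbf(Xq, centers, gamma=4.0) @ w
err = np.abs(pred - np.sin(3 * Xq[:, 0]) * Xq[:, 1] ** 2).mean()
print(f"mean reconstruction error: {err:.3f}")
```

The online evaluation is a single matrix-vector product, which is why this structure suits kHz-rate haptic loops: accuracy is traded against the number of centers (neurons), as the abstract's error analysis notes.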
NASA Astrophysics Data System (ADS)
Setscheny, Stephan
The interaction between human beings and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and sensors for detecting human interactions, developers gain new possibilities for realising innovative interfaces. As this movement progresses, the relevance of computers in the conventional sense, and of graphical user interfaces, is decreasing. A strong impact of this technical evolution can be seen especially in the areas of ubiquitous computing and interaction through tangible user interfaces. Apart from this, tangible and experienceable interaction offers users an interactive and intuitive method for controlling technical objects. The implementation of microcontrollers for control functions and of sensors enables the realisation of these experienceable interfaces. Besides the theory of tangible user interfaces, consideration of sensors and the Arduino platform forms a main aspect of this work.
Distributed and Lumped Parameter Models for the Characterization of High Throughput Bioreactors
Conoscenti, Gioacchino; Cutrì, Elena; Tuan, Rocky S.; Raimondi, Manuela T.; Gottardi, Riccardo
2016-01-01
Next generation bioreactors are being developed to generate multiple human cell-based tissue analogs within the same fluidic system, to better recapitulate the complexity and interconnection of human physiology [1, 2]. The effective development of these devices requires a solid understanding of their interconnected fluidics, to predict the transport of nutrients and waste through the constructs and improve the design accordingly. In this work, we focus on a specific model of bioreactor, with multiple inputs/outputs, aimed at generating osteochondral constructs, i.e., a biphasic construct in which one side is cartilaginous in nature, while the other is osseous. We next develop a general computational approach to model the microfluidics of a multi-chamber, interconnected system that may be applied to human-on-chip devices. This objective requires overcoming several challenges at the level of computational modeling. The main one consists of addressing the multi-physics nature of the problem that combines free flow in channels with hindered flow in porous media. Fluid dynamics is also coupled with advection-diffusion-reaction equations that model the transport of biomolecules throughout the system and their interaction with living tissues and constructs. Ultimately, we aim at providing a predictive approach useful for the general organ-on-chip community. To this end, we have developed a lumped parameter approach that allows us to analyze the behavior of multi-unit bioreactor systems with modest computational effort, provided that the behavior of a single unit can be fully characterized. PMID:27669413
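A lumped parameter model of the kind described here often reduces to a hydraulic-electrical analogy: chambers become nodes, channels become resistances, and conservation of flow yields a small linear system. The Python sketch below solves such a network for a hypothetical two-chamber bioreactor with a bypass channel; all resistances and pressures are made-up illustrative values, not the paper's:

```python
import numpy as np

# Hydraulic analogy: flow through each channel obeys Q = (P_i - P_j)/R.
# Network: inlet(0) -> chamber A(1) -> chamber B(2) -> outlet(3),
# plus a bypass channel inlet -> outlet. Units are illustrative.
edges = [(0, 1, 2.0), (1, 2, 3.0), (2, 3, 2.0), (0, 3, 10.0)]
P_in, P_out = 100.0, 0.0  # fixed boundary pressures

# Unknowns: pressures at internal nodes 1 and 2; each row enforces
# conservation of flow (net flow into the node is zero).
G = np.zeros((2, 2)); b = np.zeros(2)
for i, j, R in edges:
    g = 1.0 / R
    for n, other in ((i, j), (j, i)):
        if n in (1, 2):
            row = n - 1
            G[row, row] += g
            if other in (1, 2):
                G[row, other - 1] -= g
            else:
                b[row] += g * (P_in if other == 0 else P_out)

P1, P2 = np.linalg.solve(G, b)
Q_main = (P_in - P1) / 2.0  # flow through the serial chamber path
print(f"P1={P1:.1f}, P2={P2:.1f}, Q_main={Q_main:.2f}")
```

Once a single unit's pressure-flow behavior is characterized (here a constant resistance), multi-unit systems are just larger versions of the same sparse linear solve, which is what keeps the computational effort modest.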
Distributed and Lumped Parameter Models for the Characterization of High Throughput Bioreactors.
Iannetti, Laura; D'Urso, Giovanna; Conoscenti, Gioacchino; Cutrì, Elena; Tuan, Rocky S; Raimondi, Manuela T; Gottardi, Riccardo; Zunino, Paolo
Next generation bioreactors are being developed to generate multiple human cell-based tissue analogs within the same fluidic system, to better recapitulate the complexity and interconnection of human physiology [1, 2]. The effective development of these devices requires a solid understanding of their interconnected fluidics, to predict the transport of nutrients and waste through the constructs and improve the design accordingly. In this work, we focus on a specific model of bioreactor, with multiple inputs/outputs, aimed at generating osteochondral constructs, i.e., a biphasic construct in which one side is cartilaginous in nature, while the other is osseous. We next develop a general computational approach to model the microfluidics of a multi-chamber, interconnected system that may be applied to human-on-chip devices. This objective requires overcoming several challenges at the level of computational modeling. The main one consists of addressing the multi-physics nature of the problem that combines free flow in channels with hindered flow in porous media. Fluid dynamics is also coupled with advection-diffusion-reaction equations that model the transport of biomolecules throughout the system and their interaction with living tissues and constructs. Ultimately, we aim at providing a predictive approach useful for the general organ-on-chip community. To this end, we have developed a lumped parameter approach that allows us to analyze the behavior of multi-unit bioreactor systems with modest computational effort, provided that the behavior of a single unit can be fully characterized.
Corti, Kevin; Gillespie, Alex
2015-01-01
We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence. PMID:26042066
Lin, Hsien-Cheng; Chiu, Yu-Hsien; Chen, Yenming J; Wuang, Yee-Pay; Chen, Chiu-Ping; Wang, Chih-Chung; Huang, Chien-Ling; Wu, Tang-Meng; Ho, Wen-Hsien
2017-11-01
This study developed an interactive computer game-based visual perception learning system for special education children with developmental delay. To investigate whether perceived interactivity affects continued use of the system, this study developed a theoretical model of the process in which learners decide whether to continue using an interactive computer game-based visual perception learning system. The technology acceptance model, which considers perceived ease of use, perceived usefulness, and perceived playfulness, was extended by integrating perceived interaction (i.e., learner-instructor interaction and learner-system interaction) and then analyzing the effects of these perceptions on satisfaction and continued use. Data were collected from 150 participants (rehabilitation therapists, medical paraprofessionals, and parents of children with developmental delay) recruited from a single medical center in Taiwan. Structural equation modeling and partial-least-squares techniques were used to evaluate relationships within the model. The modeling results indicated that both perceived ease of use and perceived usefulness were positively associated with both learner-instructor interaction and learner-system interaction. However, perceived playfulness only had a positive association with learner-system interaction and not with learner-instructor interaction. Moreover, satisfaction was positively affected by perceived ease of use, perceived usefulness, and perceived playfulness. Thus, satisfaction positively affects continued use of the system. The data obtained by this study can be applied by researchers, designers of computer game-based learning systems, special education workers, and medical professionals. Copyright © 2017 Elsevier B.V. All rights reserved.
Neilson, Peter D; Neilson, Megan D
2005-09-01
Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.
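The adaptive-filter building block that AMT relies on can be illustrated with the classical least-mean-squares (LMS) rule; note this is a generic textbook adaptive filter, not Neilson's cerebellar network module. A minimal Python sketch identifying an unknown FIR "plant" online:

```python
import numpy as np

rng = np.random.default_rng(2)

def lms(x, d, n_taps=8, mu=0.05):
    """Least-mean-squares adaptive FIR filter.

    Adapts tap weights online so the filter output tracks the desired
    signal d from input x, in the spirit of adaptive internal models.
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
        y[n] = w @ u
        e = d[n] - y[n]                      # prediction error
        w += mu * e * u                      # gradient-descent update
    return w, y

# Unknown plant: a fixed FIR system the filter must identify.
h = np.array([0.5, -0.3, 0.2])
x = rng.normal(size=4000)
d = np.convolve(x, h)[: len(x)]
w, y = lms(x, d)
print(np.round(w[:3], 2))
```

After convergence the leading taps match the plant's impulse response, i.e., the filter has formed an internal model of the input-output relationship, which is the role AMT assigns to its neural adaptive filters.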
feature extraction, human-computer interaction, and physics-based modeling. Professional Experience: 2009. Ph.D., computer science, University of Colorado at Boulder; M.S., computer science, University of Colorado at Boulder; B.S., computer science, New Mexico Institute of Mining and Technology
Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI
Krach, Sören; Hegel, Frank; Wrede, Britta; Sagerer, Gerhard; Binkofski, Ferdinand; Kircher, Tilo
2008-01-01
Background: When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines, it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience, the ability to attribute intentions and desires to others is referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase in the human-likeness of interaction partners modulates the participants' ToM-associated cortical activity.

Methodology/Principal Findings: By means of functional magnetic resonance imaging (n = 20 subjects) we investigated cortical activity modulation during a highly interactive human-robot game. Increasing degrees of human-likeness for the game partner were introduced by means of a computer partner, a functional robot, an anthropomorphic robot and a human partner. The classical iterated prisoner's dilemma game was applied as the experimental task, which allowed for an implicit detection of ToM-associated cortical activity. During the experiment, participants always played against a random sequence, unbeknownst to them. Irrespective of the surmised interaction partners' responses, participants reported having experienced more fun and competition in the interaction with increasing human-like features of their partners. Parametric modulation of the functional imaging data revealed a highly significant linear increase of cortical activity in the medial frontal cortex as well as in the right temporo-parietal junction in correspondence with the increase of human-likeness of the interaction partner (computer
Can machines think? Interaction and perspective taking with robots investigated via fMRI.
Krach, Sören; Hegel, Frank; Wrede, Britta; Sagerer, Gerhard; Binkofski, Ferdinand; Kircher, Tilo
2008-07-09
When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines, it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience, the ability to attribute intentions and desires to others is referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase in the human-likeness of interaction partners modulates the participants' ToM-associated cortical activity. By means of functional magnetic resonance imaging (n = 20 subjects) we investigated cortical activity modulation during a highly interactive human-robot game. Increasing degrees of human-likeness for the game partner were introduced by means of a computer partner, a functional robot, an anthropomorphic robot and a human partner. The classical iterated prisoner's dilemma game was applied as the experimental task, which allowed for an implicit detection of ToM-associated cortical activity. During the experiment, participants always played against a random sequence, unbeknownst to them. Irrespective of the surmised interaction partners' responses, participants reported having experienced more fun and competition in the interaction with increasing human-like features of their partners. Parametric modulation of the functional imaging data revealed a highly significant linear increase of cortical activity in the medial frontal cortex as well as in the right temporo-parietal junction in correspondence with the increase of human-likeness of the interaction partner (computer
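The experimental task in this study, an iterated prisoner's dilemma in which participants unknowingly face a random sequence, is easy to simulate. A minimal Python sketch with the standard payoff values and two illustrative strategies:

```python
import random

# Standard payoff matrix (row player's points): T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_vs_random(strategy, rounds=100, seed=42):
    """Iterated prisoner's dilemma against a random-sequence partner.

    `strategy` maps the partner's previous move (None on round 1)
    to 'C' (cooperate) or 'D' (defect).
    """
    rng = random.Random(seed)
    prev, total = None, 0
    for _ in range(rounds):
        mine = strategy(prev)
        theirs = rng.choice("CD")   # the hidden random sequence
        total += PAYOFF[(mine, theirs)]
        prev = theirs
    return total

tit_for_tat = lambda prev: "C" if prev in (None, "C") else "D"
always_defect = lambda prev: "D"
s_tft = play_vs_random(tit_for_tat)
s_ad = play_vs_random(always_defect)
print(s_tft, s_ad)
```

Against a purely random partner no strategy can influence future moves, which is precisely why the paradigm isolates the participant's beliefs about the partner rather than actual strategic contingencies.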
Wearable health monitoring using capacitive voltage-mode Human Body Communication.
Maity, Shovan; Das, Debayan; Sen, Shreyas
2017-07-01
Rapid miniaturization and cost reduction of computing, along with the availability of wearable and implantable physiological sensors, have led to the growth of the human Body Area Network (BAN) formed by a network of such sensors and computing devices. One promising application of such a network is wearable health monitoring, where the collected data from the sensors would be transmitted and analyzed to assess the health of a person. Typically, the devices in a BAN are connected wirelessly (WBAN), which suffers from energy inefficiency due to the high energy consumption of wireless transmission. Human Body Communication (HBC) uses the relatively low-loss human body as the communication medium to connect these devices, promising order(s) of magnitude better energy efficiency and built-in security compared to WBAN. In this paper, we demonstrate a health monitoring device and system built using Commercial-Off-The-Shelf (COTS) sensors and components that can collect data from physiological sensors and a) transmit it through intra-body HBC to another device (hub) worn on the body, or b) upload health data through HBC-based human-machine interaction to an HBC-capable machine. The system design constraints and signal transfer characteristics for the implemented HBC-based wearable health monitoring system are measured and analyzed, showing reliable connectivity with >8x power savings compared to Bluetooth low energy (BTLE).
NASA Technical Reports Server (NTRS)
Grantham, C.
1979-01-01
The Interactive Software Invocation System (ISIS), an interactive data management system, was developed to act as a buffer between the user and the host computer system. ISIS provides the user with a powerful system for developing software or systems in the interactive environment. The user is protected from the idiosyncrasies of the host computer system by the provision of such a complete range of capabilities that the user should have no need for direct access to the host computer. These capabilities are divided into four areas: desk-top calculator, data editor, file manager, and tool invoker.
An Overview of Computer-Based Natural Language Processing.
ERIC Educational Resources Information Center
Gevarter, William B.
Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…
Introduction to This Special Issue on Context-Aware Computing.
ERIC Educational Resources Information Center
Moran, Thomas P.; Dourish, Paul
2001-01-01
Discusses pervasive, or ubiquitous, computing; explains the notion of context; and defines context-aware computing as the key to disperse and enmesh computation into our lives. Considers context awareness in human-computer interaction and describes the broad topic areas of the essays included in this special issue. (LRW)
User interface design principles for the SSM/PMAD automated power system
NASA Technical Reports Server (NTRS)
Jakstas, Laura M.; Myers, Chris J.
1991-01-01
Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed that provides human access to the functionality of the power system and exemplifies current techniques in user interface design. The testbed user interface was designed to let an engineer operate the system easily, without significant knowledge of computer systems, and to provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, while enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined, and an engineer's interactions with the system are described.
Novel 3-D Computer Model Can Help Predict Pathogens’ Roles in Cancer | Poster
To understand how bacterial and viral infections contribute to human cancers, four NCI at Frederick scientists turned not to the lab bench, but to a computer. The team has created the world’s first—and currently, only—3-D computational approach for studying interactions between pathogen proteins and human proteins based on a molecular adaptation known as interface mimicry.
Computational Intelligence Techniques for Tactile Sensing Systems
Gastaldo, Paolo; Pinna, Luigi; Seminara, Lucia; Valle, Maurizio; Zunino, Rodolfo
2014-01-01
Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach to the classification of touch modalities; its main results are a procedure to enhance the system's generalization ability and an architecture for multi-class recognition applications. An experimental campaign involving 70 participants, each using three different modalities to touch the upper surface of the sensor array, was conducted and confirmed the validity of the approach. PMID:24949646
Designing automation for human use: empirical studies and quantitative models.
Parasuraman, R
2000-07-01
An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered: mental workload, situation awareness, complacency, and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.
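The types-and-levels framework described above can be made concrete as a small data structure: each of the four information-processing stages of the Parasuraman/Sheridan/Wickens model is assigned a level on the classic 1–10 automation scale. The particular level assignments below are illustrative, not taken from the paper.

```python
# Minimal sketch of a "types and levels of automation" design record.
# Stage names follow the Parasuraman/Sheridan/Wickens four-stage model;
# the 1 (fully manual) to 10 (fully automatic) scale is the classic
# Sheridan-Verplank scale. The example levels are purely illustrative.

from dataclasses import dataclass

STAGES = ("information_acquisition", "information_analysis",
          "decision_selection", "action_implementation")

@dataclass
class AutomationDesign:
    levels: dict  # stage name -> automation level, 1..10

    def validate(self) -> None:
        for stage in STAGES:
            level = self.levels.get(stage)
            assert level is not None and 1 <= level <= 10, stage

design = AutomationDesign(levels={
    "information_acquisition": 7,   # automation filters and highlights data
    "information_analysis": 5,      # automation suggests interpretations
    "decision_selection": 3,        # human chooses among presented options
    "action_implementation": 2,     # human executes the chosen action
})
design.validate()
```

Evaluating a candidate design then amounts to asking, for each stage's level, what the human performance consequences (workload, situation awareness, complacency, skill degradation) would be.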
Design of Web-based Management Information System for Academic Degree & Graduate Education
NASA Astrophysics Data System (ADS)
Duan, Rui; Zhang, Mingsheng
For every organization, a management information system is not only a computer-based human-machine system that can support and help the administrative supervisor, but also an open technological system for society. Beyond gathering, transmitting, and saving information, it should supply an interaction function facing the organization and its environment. The authors start from contingency theory and design a web-based management information system for academic degree and graduate education, based on an analysis of the workflow of the domestic academic degree and graduate education system. The application of the system is also briefly introduced in this paper.
A Single Camera Motion Capture System for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Okada, Ryuzo; Stenger, Björn
This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when a body part reappears. We present two applications of our method that run in real time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
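The tree arrangement pays off because the likelihood can be evaluated coarse-to-fine: score the cluster representatives at each depth, descend only into the most promising clusters. The sketch below shows that idea with a beam search over a balanced toy tree; the scalar "poses" and the Gaussian likelihood are illustrative stand-ins for the paper's pose vectors and silhouette-matching likelihood.

```python
# Hedged sketch of tree-based filtering: pose hypotheses are clustered
# into a tree, and each frame's likelihood is evaluated coarse-to-fine,
# descending only into the best-scoring clusters (a balanced tree is
# assumed here for simplicity).

import math

class Node:
    def __init__(self, pose, children=()):
        self.pose = pose              # representative pose (a scalar here)
        self.children = list(children)

def likelihood(pose, observation):
    return math.exp(-(pose - observation) ** 2)

def tree_filter(root, observation, beam=2):
    """Return the best leaf pose via beam search down the tree."""
    frontier = [root]
    while any(n.children for n in frontier):
        children = [c for n in frontier for c in n.children] or frontier
        children.sort(key=lambda n: likelihood(n.pose, observation), reverse=True)
        frontier = children[:beam]    # descend only into the top clusters
    return max(frontier, key=lambda n: likelihood(n.pose, observation)).pose

# Two-level tree over scalar "poses"
root = Node(0.0, [Node(-1.0, [Node(-1.5), Node(-0.5)]),
                  Node( 1.0, [Node( 0.5), Node( 1.5)])])
print(tree_filter(root, observation=0.6))  # -> 0.5
```

With a beam width much smaller than the number of leaves, only a fraction of the pose hypotheses need a (costly) likelihood evaluation per frame, which is what makes the real-time applications feasible.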
ERIC Educational Resources Information Center
Tardif-Williams, Christine Y.; Owen, Frances; Feldman, Maurice; Tarulli, Donato; Griffiths, Dorothy; Sales, Carol; McQueen-Fuentes, Glenys; Stoner, Karen
2007-01-01
We tested the effectiveness of an interactive, video CD-ROM in teaching persons with intellectual disabilities (ID) about their human rights. Thirty-nine participants with ID were trained using both a classroom activity-based version of the training program and the interactive CD-ROM in a counterbalanced presentation. All individuals were pre- and…
Advanced systems biology methods in drug discovery and translational biomedicine.
Zou, Jun; Zheng, Ming-Wu; Li, Gen; Su, Zhi-Guang
2013-01-01
Systems biology has developed rapidly in recent years and has been widely utilized in biomedicine to better understand the molecular basis of human disease and the mechanism of drug action. Here, we discuss the fundamental concept of systems biology and its two most commonly used computational methods: network analysis and dynamical modeling. The applications of systems biology in elucidating human disease are highlighted, consisting of human disease networks, treatment response prediction, investigation of disease mechanisms, and disease-associated gene prediction. In addition, important advances in drug discovery, to which systems biology makes significant contributions, are discussed, including drug-target networks, prediction of drug-target interactions, investigation of drug adverse effects, drug repositioning, and drug combination prediction. The systems biology methods and applications covered in this review provide a framework for addressing disease mechanism and approaching drug discovery, which will facilitate the translation of research findings into clinical benefits such as novel biomarkers and promising therapies.
[Characteristics of autonomic status in employees working with computers].
Vlasova, E M; Zaĭtseva, N V; Maliutina, N N
2011-01-01
Computers have spread into virtually every sphere of occupational activity; one can hardly find an industrial enterprise without them. In contemporary industry, protecting health under conditions of human-computer interaction and evaluating the harm experienced by computer users remain topical problems. The social and occupational environment is not always comfortable for the human body. Changes in occupational conditions driven by the wide use of computer technologies have decreased the role of manual labour and increased the role of intellectual work; on the other hand, the pursuit of economic profit narrows the individual's "comfort zone" through constant psychoemotional stress and causes "burnout". Staying healthy under constant stress is impossible.
Design Guidance for Computer-Based Procedures for Field Workers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron
Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers' procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant for the task and situation at hand, which has potential consequences of taking up valuable time when operators must be responding to the situation, and potentially leading operators down an incorrect response path.
Other challenges related to use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying on other sources of additional information to ensure a functional and accurate understanding of the current plant status (Converse, 1995; Fink, Killian, Hanes, and Naser, 2009; Le Blanc, Oxstrand, and Waicosky, 2012). This report provides design guidance for the human-system interaction and the graphical user interface of a CBP system. The guidance is based on human factors research related to the design and usability of CBPs conducted by Idaho National Laboratory, 2012-2016.
2011-01-01
Background: Although principles based in motor learning, rehabilitation, and human-computer interfaces can guide the design of effective interactive systems for rehabilitation, a unified approach that connects these key principles into an integrated design, and can form a methodology generalizable to interactive stroke rehabilitation, is presently unavailable. Results: This paper integrates phenomenological approaches to interaction and embodied knowledge with rehabilitation practices and theories to achieve the basis for a methodology that can support effective adaptive, interactive rehabilitation. Our resulting methodology provides guidelines for the development of an action representation, quantification of action, and the design of interactive feedback. As Part I of a two-part series, this paper presents key principles of the unified approach. Part II then describes the application of this approach within the implementation of the Adaptive Mixed Reality Rehabilitation (AMRR) system for stroke rehabilitation. Conclusions: The accompanying principles for composing novel mixed reality environments for stroke rehabilitation can advance the design and implementation of effective mixed reality systems for the clinical setting, and ultimately be adapted for home-based application. They can furthermore be applied to rehabilitation needs beyond stroke. PMID:21875441
Guidelines for developing distributed virtual environment applications
NASA Astrophysics Data System (ADS)
Stytz, Martin R.; Banks, Sheila B.
1998-08-01
We have conducted a variety of projects that investigated the limits of virtual environments and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The final two projects are a virtual environment user interface framework and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons we have learned fall primarily into five areas: requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.
Computer graphics application in the engineering design integration system
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.
1975-01-01
The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct-coupled, low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.
Interactive systems design and synthesis of future spacecraft concepts
NASA Technical Reports Server (NTRS)
Wright, R. L.; Deryder, D. D.; Ferebee, M. J., Jr.
1984-01-01
An interactive systems design and synthesis is performed on future spacecraft concepts using the Interactive Design and Evaluation of Advanced Spacecraft (IDEAS) computer-aided design and analysis system. The capabilities and advantages of the systems-oriented interactive computer-aided design and analysis system are described. The synthesis of both large antenna and space station concepts, and space station evolutionary growth, is demonstrated. The IDEAS program provides the user with both an interactive graphics and an interactive computing capability, which consists of over 40 multidisciplinary synthesis and analysis modules. Thus, the user can create, analyze, conduct parametric studies on, and modify Earth-orbiting spacecraft designs (space stations, large antennas or platforms, and technologically advanced spacecraft) at an interactive terminal with relative ease. The IDEAS approach is useful during the conceptual design phase of advanced space missions, when a multiplicity of parameters and concepts must be analyzed and evaluated in a cost-effective and timely manner.
A prisoner's dilemma experiment on cooperation with people and human-like computers.
Kiesler, S; Sproull, L; Waters, K
1996-01-01
The authors investigated basic properties of social exchange and interaction with technology in an experiment on cooperation with a human-like computer partner or a real human partner. Talking with a computer partner may trigger social identity feelings or commitment norms. Participants played a prisoner's dilemma game with a confederate or a computer partner. Discussion, inducements to make promises, and partner cooperation varied across trials. On Trial 1, after discussion, most participants proposed cooperation. They kept their promises as much with a text-only computer as with a person, but less with a more human-like computer. Cooperation dropped sharply when any partner avoided discussion. The strong impact of discussion fits a social contract explanation of cooperation following discussion. Participants broke their promises to a computer more than to a person, however, indicating that people make heterogeneous commitments.
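The cooperation-versus-defection structure underlying the experiment is the standard prisoner's dilemma payoff matrix, which can be written down directly. The payoff values below are the conventional textbook ones, not necessarily those used in the study.

```python
# Standard prisoner's-dilemma payoff matrix ("C" = cooperate,
# "D" = defect). The specific values (3, 0, 5, 1) are the conventional
# textbook payoffs, used here for illustration only.

PAYOFFS = {  # (my_move, partner_move) -> (my_payoff, partner_payoff)
    ("C", "C"): (3, 3),   # mutual cooperation: both do well
    ("C", "D"): (0, 5),   # I keep my promise, partner breaks theirs
    ("D", "C"): (5, 0),   # I break my promise, partner keeps theirs
    ("D", "D"): (1, 1),   # mutual defection: both do poorly
}

def play(my_move: str, partner_move: str):
    return PAYOFFS[(my_move, partner_move)]

# Keeping a promise to cooperate pays off only if the partner keeps it too:
print(play("C", "C"))  # (3, 3)
print(play("C", "D"))  # (0, 5), the broken-promise temptation
```

The experiment's finding, that participants broke promises to a computer partner more readily than to a person, corresponds to being more willing to take the ("D", "C") payoff against a machine.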
Self-Organization: Complex Dynamical Systems in the Evolution of Speech
NASA Astrophysics Data System (ADS)
Oudeyer, Pierre-Yves
Human vocalization systems are characterized by complex structural properties. They are combinatorial, based on the systematic reuse of phonemes, and the set of repertoires in human languages is characterized by both strong statistical regularities—universals—and a great diversity. Besides, they are conventional codes culturally shared in each community of speakers. What are the origins of the forms of speech? What are the mechanisms that permitted their evolution in the course of phylogenesis and cultural evolution? How can a shared speech code be formed in a community of individuals? This chapter focuses on the way the concept of self-organization, and its interaction with natural selection, can throw light on these three questions. In particular, a computational model is presented which shows that a basic neural equipment for adaptive holistic vocal imitation, coupling directly motor and perceptual representations in the brain, can generate spontaneously shared combinatorial systems of vocalizations in a society of babbling individuals. Furthermore, we show how morphological and physiological innate constraints can interact with these self-organized mechanisms to account for both the formation of statistical regularities and diversity in vocalization systems.
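A toy simulation conveys the core self-organization claim: agents that merely imitate each other's vocalizations converge on a shared code without any central coordination. The sketch below reduces each agent's repertoire to a single scalar vowel target and uses a simple imitation update; it is loosely inspired by the class of model described, and all parameters are illustrative.

```python
# Toy sketch of self-organized convergence of vocalization targets in a
# population of babbling, imitating agents. Each agent's repertoire is
# reduced to one scalar "vowel target"; the 0.1 imitation rate and the
# population size are illustrative choices, not taken from the chapter.

import random

random.seed(0)
agents = [random.uniform(0.0, 1.0) for _ in range(20)]  # initial targets

def spread(pop):
    """Diversity of the population's targets (max minus min)."""
    return max(pop) - min(pop)

initial = spread(agents)
for _ in range(2000):
    speaker, listener = random.sample(range(len(agents)), 2)
    # The listener nudges its own target toward what it heard (imitation)
    agents[listener] += 0.1 * (agents[speaker] - agents[listener])

assert spread(agents) < initial  # the population converges on a shared code
print(f"spread: {initial:.3f} -> {spread(agents):.3f}")
```

The full models add the morphological and physiological constraints mentioned above, which bias where the shared targets end up and thereby produce both the universals and the diversity observed across languages.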
ERIC Educational Resources Information Center
McCartney, Robert; Tenenberg, Josh
2008-01-01
Some have proposed that realistic problem situations are better for learning. This issue contains two articles that examine the effects of "making it real" in computer architecture and human-computer interaction.
Genomics and transcriptomics in drug discovery.
Dopazo, Joaquin
2014-02-01
The popularization of genomic high-throughput technologies is causing a revolution in biomedical research and, particularly, is transforming the field of drug discovery. Systems biology offers a framework to understand the extensive human genetic heterogeneity revealed by genomic sequencing in the context of the network of functional, regulatory and physical protein-drug interactions. Thus, approaches to find biomarkers and therapeutic targets will have to take into account the complex system nature of the relationships of the proteins with the disease. Pharmaceutical companies will have to reorient their drug discovery strategies considering the human genetic heterogeneity. Consequently, modeling and computational data analysis will have an increasingly important role in drug discovery. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. It runs on the Windows platform, using the Windows RTX real-time extension subsystem to ensure real-time performance, combined with a reflective memory network to complete real-time tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. At the same time, LabWindows/CVI is used to build a graphical interface and to handle the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of simulation data, which run under a Win32 process. Through the design of RTX shared memory and a task-scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is achieved. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy, as well as good human-computer interaction.
Augmented reality in laparoscopic surgical oncology.
Nicolau, Stéphane; Soler, Luc; Mutter, Didier; Marescaux, Jacques
2011-09-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. However, minimally invasive surgery increases the operative difficulty, since depth perception is usually dramatically reduced, the field of view is limited, and the sense of touch is transmitted by an instrument. These drawbacks can currently be reduced by computer technology guiding the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), Augmented Reality (AR) can augment the surgeon's intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization on the real patient. 3D visualization can be performed directly from the medical image, without the need for a pre-processing step, thanks to volume rendering, but better results are obtained with surface rendering after organ and pathology delineation and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology; they also show that interactivity is currently limited by soft-organ movement and by interaction between surgical instruments and organs. Although current automatic AR systems show the feasibility of such an approach, they still rely on specific and expensive equipment that is not available in clinical routine. Moreover, they are not yet robust enough, owing to the high complexity of developing real-time registration that takes organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that AR will become a standard requirement for future computer-assisted surgical oncology. In this article, we will explain the concept of AR and its principles.
Then, we will review the existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we will discuss future evolutions and the issues that still have to be tackled so that this technology can be seamlessly integrated into the operating room. Copyright © 2011 Elsevier Ltd. All rights reserved.