Sample records for multimodal human-computer dialogue

  1. An intelligent multi-media human-computer dialogue system

    NASA Technical Reports Server (NTRS)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  2. The Integrated Mission-Planning Station: Functional Requirements, Aviator-Computer Dialogue, and Human Engineering Design Criteria.

    DTIC Science & Technology

    1983-08-01

    Report prepared by Anacapa Sciences, Inc., Santa Barbara, CA (S. P. Rogers). Keywords: interactive systems; aviation; control-display; functional requirements; aviator-computer dialogue; avionics systems; map display; Army aviation; design criteria; helicopters; mission planning; cartography; digital map; human factors; navigation.

  3. Language evolution and human-computer interaction

    NASA Technical Reports Server (NTRS)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  4. Using a Dialogue System Based on Dialogue Maps for Computer Assisted Second Language Learning

    ERIC Educational Resources Information Center

    Choi, Sung-Kwon; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    In order to use dialogue systems for computer-assisted second-language learning, one of the difficult issues is how to construct large-scale dialogue knowledge that matches the dialogue modelling of the dialogue system. This paper describes how we have accomplished the short-term construction of large-scale and…

  5. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

    A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  6. Appearance-based human gesture recognition using multimodal features for human computer interaction

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a critically important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework that combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative, and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
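
    The two fusion strategies summarized above can be illustrated with a small, self-contained sketch. The snippet below is a hypothetical example rather than the authors' implementation: synthetic "facial expression" and "hand motion" feature groups stand in for the real features, early fusion weights and concatenates them before an LDA projection, and late fusion weights the per-modality class posteriors.

    ```python
    # Hedged sketch of feature-level vs. decision-level fusion for gesture
    # recognition; synthetic data stands in for real facial/hand features.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, n_classes = 600, 12                        # 12 gesture classes, as in the paper
    y = rng.integers(0, n_classes, n)
    face = rng.normal(y[:, None], 2.0, (n, 20))   # synthetic facial-expression features
    hand = rng.normal(y[:, None], 1.5, (n, 30))   # synthetic hand-motion features

    Xf_tr, Xf_te, Xh_tr, Xh_te, y_tr, y_te = train_test_split(
        face, hand, y, test_size=0.3, random_state=0)

    # Feature-level (early) fusion: weight, concatenate, project with LDA.
    w_face, w_hand = 0.4, 0.6                     # assumed modality weights
    early_tr = np.hstack([w_face * Xf_tr, w_hand * Xh_tr])
    early_te = np.hstack([w_face * Xf_te, w_hand * Xh_te])
    lda = LinearDiscriminantAnalysis().fit(early_tr, y_tr)
    print("feature-level fusion accuracy:", lda.score(early_te, y_te))

    # Decision-level (late) fusion: weight per-modality posterior probabilities.
    clf_face = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)
    clf_hand = LogisticRegression(max_iter=1000).fit(Xh_tr, y_tr)
    post = w_face * clf_face.predict_proba(Xf_te) + w_hand * clf_hand.predict_proba(Xh_te)
    print("decision-level fusion accuracy:", (post.argmax(axis=1) == y_te).mean())
    ```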

  7. Designing User-Computer Dialogues: Basic Principles and Guidelines.

    ERIC Educational Resources Information Center

    Harrell, Thomas H.

    This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…

  8. Computer-aided psychotherapy based on multimodal elicitation, estimation and regulation of emotion.

    PubMed

    Cosić, Krešimir; Popović, Siniša; Horvat, Marko; Kukolja, Davor; Dropuljić, Branimir; Kovač, Bernard; Jakovljević, Miro

    2013-09-01

    Contemporary psychiatry is looking to the affective sciences to understand human behavior, cognition, and the mind in health and disease. Since it has been recognized that emotions play a pivotal role in the human mind, an ever-increasing number of laboratories and research centers are interested in affective sciences, affective neuroscience, affective psychology, and affective psychopathology. This paper therefore presents multidisciplinary research results of the Laboratory for Interactive Simulation System at the Faculty of Electrical Engineering and Computing, University of Zagreb, on stress resilience. A patient's distortion in the emotional processing of multimodal input stimuli is predominantly a consequence of a cognitive deficit resulting from his or her individual mental health disorder. These emotional distortions in the patient's multimodal physiological, facial, acoustic, and linguistic features related to the presented stimulation can be used as an indicator of the patient's mental illness. Real-time processing and analysis of the patient's multimodal response to annotated input stimuli is based on appropriate machine learning methods from computer science. Comprehensive longitudinal multimodal analysis of the patient's emotion, mood, feelings, attention, motivation, decision-making, and working memory, in synchronization with the multimodal stimuli, provides an extremely valuable big database for data mining, machine learning, and machine reasoning. The presented multimedia stimulus sequence includes personalized images, movies, and sounds, as well as semantically congruent narratives. Simultaneously with stimulus presentation, the patient provides subjective emotional ratings of the presented stimuli in terms of subjective units of discomfort/distress, discrete emotions, or valence and arousal. These subjective emotional ratings of the input stimuli and the corresponding physiological, speech, and facial output features provide enough information for evaluation of the patient's cognitive appraisal deficit

  9. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    NASA Astrophysics Data System (ADS)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
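
    The "modality server" idea described above (wrapping each recognition engine behind a socket interface so clients are insulated from vendor packages) can be sketched in miniature. Everything below is invented for illustration: the line-oriented RECOGNIZE/QUIT protocol, the stub recognizer, and the port number are assumptions, not the system's actual interface.

    ```python
    # Minimal sketch of a "modality server": a socket front-end that hides the
    # underlying recognition engine behind a simple line-oriented protocol.
    import socket
    import threading
    import time

    def fake_speech_recognizer(audio_tag: str) -> str:
        # Stand-in for the commercial/research engine a real modality server wraps.
        return f"transcript-of({audio_tag})"

    def handle_client(conn: socket.socket) -> None:
        with conn, conn.makefile("rw") as stream:
            for line in stream:
                cmd, _, arg = line.strip().partition(" ")
                if cmd == "RECOGNIZE":
                    stream.write(fake_speech_recognizer(arg) + "\n")
                    stream.flush()
                elif cmd == "QUIT":
                    break

    def serve(host: str = "127.0.0.1", port: int = 5050) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        threading.Thread(target=serve, daemon=True).start()
        time.sleep(0.3)                     # give the server a moment to start
        # Clients see only the abstract socket interface, not the engine behind it.
        with socket.create_connection(("127.0.0.1", 5050)) as c, c.makefile("rw") as s:
            s.write("RECOGNIZE clip-001\nQUIT\n")
            s.flush()
            print(s.readline().strip())     # expected: transcript-of(clip-001)
    ```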

  10. Human-Computer Interaction in Tactical Operations: Designing for Effective Human-Computer Dialogue

    DTIC Science & Technology

    1990-09-01

    developing re-usable interface software. Furthermore, steps can be taken toward standardization, and the specifier may be able to take on an increased...The semantic level deals with the meaning of the dialogue to the user. The user has a "point of view" or a "mental model" which provides a context for...information may not occur. As shown in Figure 3-4, the user's mental model is termed the USER MODEL (Norman and Draper, 1986, p. 47). The programmer's

  11. Collaborative Dialogue in Synchronous Computer-Mediated Communication and Face-to-Face Communication

    ERIC Educational Resources Information Center

    Zeng, Gang

    2017-01-01

    Previous research has documented that collaborative dialogue promotes L2 learning in both face-to-face (F2F) and synchronous computer-mediated communication (SCMC) modalities. However, relatively little research has explored modality effects on collaborative dialogue. Thus, motivated by sociocultural theory, this study examines how F2F compares…

  12. HCI∧2 framework: a software framework for multimodal human-computer interaction systems.

    PubMed

    Shen, Jie; Pantic, Maja

    2013-12-01

    This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI∧2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a shared-memory-based data transport protocol for message delivery and a TCP-based system management protocol. The latter ensures that the integrity of system structure is maintained at runtime. With the inclusion of bridging modules, the HCI∧2 Framework is interoperable with other software frameworks including Psyclone and ActiveMQ. In addition to the core communication middleware, we also present the integrated development environment (IDE) of the HCI∧2 Framework. It provides a complete graphical environment to support every step in a typical MHCI system development process, including module development, debugging, packaging, and management, as well as the whole system management and testing. The quantitative evaluation indicates that our framework outperforms other similar tools in terms of average message latency and maximum data throughput under a typical single PC scenario. To demonstrate HCI∧2 Framework's capabilities in integrating heterogeneous modules, we present several example modules working with a variety of hardware and software. We also present an example of a full system developed using the proposed HCI∧2 Framework, which is called the CamGame system and represents a computer game based on hand-held marker(s) and low-cost camera(s).
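
    The publish/subscribe pattern underlying the framework can be illustrated with a minimal in-process broker. The Broker class and its subscribe/publish methods below are invented names for illustration; this is not the HCI∧2 Framework API, and it omits the shared-memory transport and TCP system-management protocol described above.

    ```python
    # Minimal in-process publish/subscribe broker illustrating the P/S pattern.
    from collections import defaultdict
    from typing import Any, Callable, DefaultDict, List

    class Broker:
        def __init__(self) -> None:
            self._subs: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
            self._subs[topic].append(handler)

        def publish(self, topic: str, message: Any) -> None:
            # Modules stay decoupled: publishers never reference subscribers directly.
            for handler in self._subs[topic]:
                handler(message)

    broker = Broker()
    broker.subscribe("face/expression", lambda m: print("fusion module got:", m))
    broker.subscribe("face/expression", lambda m: print("logger got:", m))
    broker.publish("face/expression", {"label": "smile", "confidence": 0.91})
    ```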

  13. Human-computer interaction for alert warning and attention allocation systems of the multimodal watchstation

    NASA Astrophysics Data System (ADS)

    Obermayer, Richard W.; Nugent, William A.

    2000-11-01

    The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human-computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designer's guide are presented.

  14. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders.

    PubMed

    Tanaka, Hiroki; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2017-01-01

    Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children or young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members from the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system's effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and the supplemental use of human-based training anywhere and anytime.

  15. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders

    PubMed Central

    Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2017-01-01

    Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children or young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members from the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system’s effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and the supplemental use of human-based training anywhere and anytime. PMID:28796781

  16. Multimodal and ubiquitous computing systems: supporting independent-living older users.

    PubMed

    Perry, Mark; Dowdall, Alan; Lines, Lorna; Hone, Kate

    2004-09-01

    We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors--these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check if they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.

  17. An Enduring Dialogue between Computational and Empirical Vision.

    PubMed

    Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J

    2018-04-01

    In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. The Effects of Multimodality through Storytelling Using Various Movie Clips

    ERIC Educational Resources Information Center

    Kim, SoHee

    2016-01-01

    This study examines the salient multimodal approaches for communicative competence and learners' reactions through storytelling tasks with three different modes: a silent movie clip, a movie clip with only sound effects, and a movie clip with sound effects and dialogue. In order to measure different multimodal effects and to define better delivery…

  19. Multimodal 2D Brain Computer Interface.

    PubMed

    Almajidy, Rand K; Boudria, Yacine; Hofmann, Ulrich G; Besio, Walter; Mankodiya, Kunal

    2015-08-01

    In this work we used multimodal, non-invasive brain signal recording systems, namely Near Infrared Spectroscopy (NIRS), disc electrode electroencephalography (EEG), and tripolar concentric ring electrode (TCRE) electroencephalography (tEEG). Seven healthy subjects participated in our experiments to control a 2-D Brain Computer Interface (BCI). Four motor imagery tasks were performed: imagined motion of the left hand, the right hand, both hands, and both feet. The signal slope (SS) of the change in oxygenated hemoglobin concentration measured by NIRS was used for feature extraction, while the power spectral density (PSD) of both EEG and tEEG in the 8-30 Hz frequency band was used for feature extraction. Linear Discriminant Analysis (LDA) was used to classify different combinations of the aforementioned features. The highest classification accuracy (85.2%) was achieved by using features from all three brain signal recording modalities. The improvement in classification accuracy was highly significant (p = 0.0033) when using the multimodal signal features as compared to pure EEG features.
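
    A rough sketch of the feature pipeline described above, run on synthetic signals, is shown below. The signal models, trial lengths, and class structure are invented; only the general recipe (NIRS signal slope, 8-30 Hz band power from a Welch PSD for EEG/tEEG, and LDA on the concatenated features) follows the abstract.

    ```python
    # Hedged sketch: NIRS slope + EEG/tEEG band-power features classified with LDA.
    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    fs_eeg, fs_nirs, n_trials = 256, 10, 120
    y = rng.integers(0, 4, n_trials)                  # four motor-imagery tasks

    def nirs_slope(trial):
        # Signal slope (SS): slope of a straight-line fit to the HbO time course.
        t = np.arange(trial.size) / fs_nirs
        return np.polyfit(t, trial, 1)[0]

    def bandpower_8_30(trial):
        # 8-30 Hz band power from a Welch power spectral density estimate.
        f, pxx = welch(trial, fs=fs_eeg, nperseg=fs_eeg)
        return pxx[(f >= 8) & (f <= 30)].sum()

    features = []
    for label in y:
        nirs = 0.05 * label * np.arange(30) / fs_nirs + rng.normal(0, 0.1, 30)
        eeg = rng.normal(0, 1.0 + 0.2 * label, 2 * fs_eeg)   # 2 s of synthetic EEG
        teeg = rng.normal(0, 1.0 + 0.1 * label, 2 * fs_eeg)  # 2 s of synthetic tEEG
        features.append([nirs_slope(nirs), bandpower_8_30(eeg), bandpower_8_30(teeg)])

    X = np.asarray(features)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("training accuracy on synthetic multimodal features:", lda.score(X, y))
    ```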

  20. Modeling of dialogue regimes of distance robot control

    NASA Astrophysics Data System (ADS)

    Larkin, E. V.; Privalov, A. N.

    2017-02-01

    The process of distance control of mobile robots is investigated. A Petri-Markov net for modeling the dialogue regime is worked out. It is shown that the sequence of operations of the subjects involved (a human operator, a dialogue computer, and an onboard computer) may be simulated using the theory of semi-Markov processes. From the semi-Markov process of the general form, a Markov process was obtained which includes only the states of transaction generation. It is shown that a real transaction flow is the result of «concurrency» in the states of the Markov process. An iteration procedure for evaluating the transaction flow parameters, which takes the effect of «concurrency» into account, is proposed.
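
    The modeling idea can be sketched as a small simulation: a cyclic hand-over between operator, dialogue computer, and onboard computer with random sojourn times, from which a transaction flow is measured. The transition structure and timing distributions below are invented for illustration and are not the Petri-Markov net worked out in the paper.

    ```python
    # Hedged sketch: simulate a dialogue-control loop as a semi-Markov-style
    # process with exponential sojourn times and estimate the transaction rate.
    import random
    random.seed(0)

    next_state = {"operator": "dialogue_computer",
                  "dialogue_computer": "onboard_computer",
                  "onboard_computer": "operator"}
    # Mean sojourn time (seconds) spent in each state before handing over.
    mean_sojourn = {"operator": 3.0, "dialogue_computer": 0.4, "onboard_computer": 1.2}

    def simulate(n_steps: int = 30000):
        t, state = 0.0, "operator"
        transaction_times = []                 # a "transaction" = command sent onboard
        for _ in range(n_steps):
            t += random.expovariate(1.0 / mean_sojourn[state])
            if state == "dialogue_computer":   # transaction generated on hand-over
                transaction_times.append(t)
            state = next_state[state]
        return transaction_times

    times = simulate()
    gaps = [b - a for a, b in zip(times, times[1:])]
    print("mean time between transactions: %.2f s" % (sum(gaps) / len(gaps)))
    ```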

  1. Generating Multimodal References

    ERIC Educational Resources Information Center

    van der Sluis, Ielka; Krahmer, Emiel

    2007-01-01

    This article presents a new computational model for the generation of multimodal referring expressions (REs), based on observations in human communication. The algorithm is an extension of the graph-based algorithm proposed by Krahmer, van Erk, and Verleg (2003) and makes use of a so-called Flashlight Model for pointing. The Flashlight Model…

  2. Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals

    NASA Astrophysics Data System (ADS)

    Lisetti, Christine Lætitia; Nasoz, Fatma

    2004-12-01

    We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.

  3. [Considering human peculiarities in attention to health care through dialogue and assistance].

    PubMed

    Pereira, Adriana Dall'Asta; de Freitas, Hilda Maria Barbosa; Ferreira, Carla Lizandra de Lima; Marchiori, Mara Regina Caino Teixeira; Souza, Martha Helena Teixeira; Backes, Dirce Stein

    2010-03-01

    The aim of this qualitative exploratory research is to understand how health workers relate to the main object of their work--the user--both subject and author of his/her life history. Eleven nursing practitioners from a Basic Health Unit responded to a semi-structured instrument in March and April 2008. Their statements revealed two converging themes: (1) consideration of human peculiarities in attention to health care; and (2) dialogue and assistance as interactive possibilities. We found that attention to health care is broadening the debates over valuing human peculiarities through dialogue and assistance as interactive possibilities.

  4. Dialogue-Based CALL: An Overview of Existing Research

    ERIC Educational Resources Information Center

    Bibauw, Serge; François, Thomas; Desmet, Piet

    2015-01-01

    Dialogue-based Computer-Assisted Language Learning (CALL) covers applications and systems allowing a learner to practice the target language in a meaning-focused conversational activity with an automated agent. We first present a common definition for dialogue-based CALL, based on three features: dialogue as the activity unit, computer as the…

  5. Prospectus on Multi-Modal Aspects of Human Factors in Transportation

    DOT National Transportation Integrated Search

    1991-02-01

    This prospectus identifies and discusses a series of human factors issues which are critical to transportation safety and productivity, and examines the potential benefits that can accrue from taking a multi-modal approach to human factors rese...

  6. Learning multimodal dictionaries.

    PubMed

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrating, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted to all positions in the signal is proposed as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to effectively localize the sound source in the video in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
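
    The paper reduces dictionary learning to a generalized eigenvector problem; the sketch below illustrates that general idea with a CCA-style analogue rather than the authors' shift-invariant algorithm. Coupled audio/video projection vectors are obtained from a single generalized eigendecomposition on synthetic, jointly generated data.

    ```python
    # Hedged sketch (not the authors' algorithm): learn a coupled audio/video
    # component by solving a generalized eigenvector problem (CCA-style).
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)
    n, da, dv = 500, 12, 16
    shared = rng.normal(size=(n, 1))                    # common underlying source
    Xa = shared @ rng.normal(size=(1, da)) + 0.5 * rng.normal(size=(n, da))
    Xv = shared @ rng.normal(size=(1, dv)) + 0.5 * rng.normal(size=(n, dv))
    Xa -= Xa.mean(0)
    Xv -= Xv.mean(0)

    Caa, Cvv = Xa.T @ Xa / n, Xv.T @ Xv / n
    Cav = Xa.T @ Xv / n
    reg = 1e-3
    A = np.block([[np.zeros((da, da)), Cav], [Cav.T, np.zeros((dv, dv))]])
    B = np.block([[Caa + reg * np.eye(da), np.zeros((da, dv))],
                  [np.zeros((dv, da)), Cvv + reg * np.eye(dv)]])
    vals, vecs = eigh(A, B)                              # generalized eigenproblem
    w = vecs[:, -1]                                      # top coupled component
    wa, wv = w[:da], w[da:]
    corr = np.corrcoef(Xa @ wa, Xv @ wv)[0, 1]
    print("correlation of learned audio/video components: %.3f" % corr)
    ```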

  7. Humanising Coursebook Dialogues

    ERIC Educational Resources Information Center

    Timmis, Ivor

    2016-01-01

    In this article, I argue that the most important thing about coursebook dialogues is not whether they are "authentic" or "inauthentic" but whether they are "plausible" as human interaction and behaviour. Coursebook dialogues are often constructed as vehicles for various kinds of language work and even sometimes as…

  8. MushyPeek: A Framework for Online Investigation of Audiovisual Dialogue Phenomena

    ERIC Educational Resources Information Center

    Edlund, Jens; Beskow, Jonas

    2009-01-01

    Evaluation of methods and techniques for conversational and multimodal spoken dialogue systems is complex, as is gathering data for the modeling and tuning of such techniques. This article describes MushyPeek, an experiment framework that allows us to manipulate the audiovisual behavior of interlocutors in a setting similar to face-to-face…

  9. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.
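
    A toy version of the decision-fusion stage can be written with hand-coded fuzzy membership functions and two rules. The membership shapes, rules, and defuzzification below are invented for illustration and do not reproduce the chapter's DSP/ANN/FLE pipeline.

    ```python
    # Toy sketch of fuzzy-logic score fusion for a bimodal (voice + fingerprint)
    # authentication decision; all membership functions and rules are invented.
    def tri(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_accept(voice_score: float, finger_score: float) -> float:
        """Return a crisp acceptance degree in [0, 1] from two match scores."""
        low_v, high_v = tri(voice_score, -0.2, 0.0, 0.6), tri(voice_score, 0.4, 1.0, 1.2)
        low_f, high_f = tri(finger_score, -0.2, 0.0, 0.6), tri(finger_score, 0.4, 1.0, 1.2)
        # Rule 1: accept if both scores are high.  Rule 2: reject if either is low.
        accept = min(high_v, high_f)
        reject = max(low_v, low_f)
        # Simple weighted defuzzification of the two rule activations.
        return accept / (accept + reject) if (accept + reject) > 0 else 0.5

    print(fuzzy_accept(0.9, 0.8))   # both scores strong -> accepted (prints 1.0)
    print(fuzzy_accept(0.9, 0.2))   # one weak score     -> rejected (prints 0.0)
    ```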

  10. Education as Dialogue

    ERIC Educational Resources Information Center

    Kazepides, Tasos

    2012-01-01

    The purpose of this paper is to show that genuine dialogue is a refined human achievement and probably the most valid criterion on the basis of which we can evaluate educational or social policy and practice. The paper explores the prerequisites of dialogue in the language games, the common certainties, the rules of logic and the variety of common…

  11. Do You Think You Can? The Influence of Student Self-Efficacy on the Effectiveness of Tutorial Dialogue for Computer Science

    ERIC Educational Resources Information Center

    Wiggins, Joseph B.; Grafsgaard, Joseph F.; Boyer, Kristy Elizabeth; Wiebe, Eric N.; Lester, James C.

    2017-01-01

    In recent years, significant advances have been made in intelligent tutoring systems, and these advances hold great promise for adaptively supporting computer science (CS) learning. In particular, tutorial dialogue systems that engage students in natural language dialogue can create rich, adaptive interactions. A promising approach to increasing…

  12. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  13. Being Human Today: A Digital Storytelling Pedagogy for Transcontinental Border Crossing

    ERIC Educational Resources Information Center

    Stewart, Kristian; Gachago, Daniela

    2016-01-01

    This article reports the findings of a collaborative digital storytelling project titled "Being Human Today," a multimodal curricular initiative that was implemented simultaneously in both a South African and an American university classroom in 2015. By facilitating dialogue and the sharing of digital stories by means of a closed…

  14. Dialogue-Based Call: A Case Study on Teaching Pronouns

    ERIC Educational Resources Information Center

    Vlugter, P.; Knott, A.; McDonald, J.; Hall, C.

    2009-01-01

    We describe a computer assisted language learning (CALL) system that uses human-machine dialogue as its medium of interaction. The system was developed to help students learn the basics of the Maori language and was designed to accompany the introductory course in Maori running at the University of Otago. The student engages in a task-based…

  15. The Human Communication Research Centre dialogue database.

    PubMed

    Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J

    1992-10-01

    The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.

  16. Generation and Evaluation of User Tailored Responses in Multimodal Dialogue

    ERIC Educational Resources Information Center

    Walker, M. A.; Whittaker, S. J.; Stent, A.; Maloor, P.; Moore, J.; Johnston, M.; Vasireddy, G.

    2004-01-01

    When people engage in conversation, they tailor their utterances to their conversational partners, whether these partners are other humans or computational systems. This tailoring, or adaptation to the partner takes place in all facets of human language use, and is based on a "mental model" or a "user model" of the conversational partner. Such…

  17. Towards an intelligent framework for multimodal affective data analysis.

    PubMed

    Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin

    2015-03-01

    An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. In order to cope with this growth of multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multi-modal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
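
    The relationship between the accuracy and error-rate figures quoted above can be checked directly; the short calculation below inverts the stated 56% relative error reduction to recover the implied baseline accuracy.

    ```python
    # Quick consistency check of the quoted figures: a 56% relative reduction
    # from the proposed system's error rate implies the baseline accuracy below.
    acc_new = 0.8795
    err_new = 1.0 - acc_new
    err_old = err_new / (1.0 - 0.56)         # invert: err_new = err_old * (1 - 0.56)
    acc_old = 1.0 - err_old
    print("implied baseline accuracy: %.1f%%" % (100 * acc_old))          # ~72.6%
    print("absolute improvement: %.1f points" % (100 * (acc_new - acc_old)))
    ```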

  18. The Written Text and Human Dialogue: Pedagogical Responses to the Age of Hypertext Media.

    ERIC Educational Resources Information Center

    Ehrhart, Donna J.; Boyd, Charley

    In June 1995, New York's Genesee Community College hosted "The Written Text and Human Dialogue," a 4-week faculty development seminar for 30 professors in the humanities and technical disciplines across the United States. The seminar sought to explore the history of human communication and writing, to expand participants' knowledge of writing…

  19. Multimodal Deep Autoencoder for Human Pose Recovery.

    PubMed

    Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng

    2015-12-01

    Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationship is inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
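
    The core claim (a non-linear mapping from image features to 3D poses outperforms a linear one) can be illustrated on synthetic data; the sketch below uses a small scikit-learn MLP as a stand-in for the authors' multimodal deep autoencoder and hypergraph-Laplacian fusion, which it does not implement.

    ```python
    # Hedged sketch: linear vs. non-linear regression from synthetic "2D image
    # features" to a synthetic non-linear "3D pose" target.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, (2000, 10))                        # stand-in image features
    W = rng.normal(size=(10, 15))
    Y = np.tanh(X @ W) + 0.05 * rng.normal(size=(2000, 15))   # non-linear "3D pose"

    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.3, random_state=0)
    lin = LinearRegression().fit(Xtr, Ytr)
    mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=0).fit(Xtr, Ytr)

    def mse(model):
        return float(np.mean((model.predict(Xte) - Yte) ** 2))

    print("linear mapping MSE:   %.4f" % mse(lin))
    print("non-linear (MLP) MSE: %.4f" % mse(mlp))
    ```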

  20. Research on Spoken Dialogue Systems

    NASA Technical Reports Server (NTRS)

    Aist, Gregory; Hieronymus, James; Dowding, John; Hockey, Beth Ann; Rayner, Manny; Chatzichrisafis, Nikos; Farrell, Kim; Renders, Jean-Michel

    2010-01-01

    Research in the field of spoken dialogue systems has been performed with the goal of making such systems more robust and easier to use in demanding situations. The term "spoken dialogue systems" signifies unified software systems containing speech-recognition, speech-synthesis, dialogue management, and ancillary components that enable human users to communicate, using natural spoken language or nearly natural prescribed spoken language, with other software systems that provide information and/or services.

  1. Transforming information for computer-aided instruction: using a Socratic Dialogue method to teach gross anatomy.

    PubMed

    Constantinou, P; Daane, S; Dev, P

    1994-01-01

    Traditional teaching of anatomy can be a difficult process of rote memorization. Computers allow information presentation to be much more dynamic and interactive; the same information can be presented in multiple organizations. Using this idea, we have implemented a new pedagogy for computer-assisted instruction in The Anatomy Lesson, an interactive digital teacher which uses a "Socratic Dialogue" metaphor, as well as a textbook-like approach, to facilitate conceptual learning in anatomy.

  2. Stepwise Connectivity of the Modal Cortex Reveals the Multimodal Organization of the Human Brain

    PubMed Central

    Sepulcre, Jorge; Sabuncu, Mert R.; Yeo, Thomas B.; Liu, Hesheng; Johnson, Keith A.

    2012-01-01

    How human beings integrate information from external sources and internal cognition to produce a coherent experience is still not well understood. During the past decades, anatomical, neurophysiological and neuroimaging research in multimodal integration have stood out in the effort to understand the perceptual binding properties of the brain. Areas in the human lateral occipito-temporal, prefrontal and posterior parietal cortices have been associated with sensory multimodal processing. Even though this, rather patchy, organization of brain regions gives us a glimpse of the perceptual convergence, the articulation of the flow of information from modality-related to the more parallel cognitive processing systems remains elusive. Using a method called Stepwise Functional Connectivity analysis, the present study analyzes the functional connectome and transitions from primary sensory cortices to higher-order brain systems. We identify the large-scale multimodal integration network and essential connectivity axes for perceptual integration in the human brain. PMID:22855814

  3. Toward Multimodal Human-Robot Interaction to Enhance Active Participation of Users in Gait Rehabilitation.

    PubMed

    Gui, Kai; Liu, Honghai; Zhang, Dingguo

    2017-11-01

    Robotic exoskeletons for physical rehabilitation have been utilized for retraining patients suffering from paraplegia and enhancing motor recovery in recent years. However, users are not voluntarily involved in most systems. This paper aims to develop a locomotion trainer with multiple gait patterns, which can be controlled by the active motion intention of users. A multimodal human-robot interaction (HRI) system is established to enhance subject's active participation during gait rehabilitation, which includes cognitive HRI (cHRI) and physical HRI (pHRI). The cHRI adopts brain-computer interface based on steady-state visual evoked potential. The pHRI is realized via admittance control based on electromyography. A central pattern generator is utilized to produce rhythmic and continuous lower joint trajectories, and its state variables are regulated by cHRI and pHRI. A custom-made leg exoskeleton prototype with the proposed multimodal HRI is tested on healthy subjects and stroke patients. The results show that voluntary and active participation can be effectively involved to achieve various assistive gait patterns.
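
    The central pattern generator component can be sketched with a Hopf oscillator whose amplitude is modulated by a (mocked) user-intention signal; the parameters and the modulation rule below are invented for illustration and are not the paper's controller.

    ```python
    # Hedged sketch of a central pattern generator (CPG): a Hopf oscillator
    # produces a rhythmic joint trajectory; its amplitude is scaled by a mock
    # intention signal standing in for the cHRI/pHRI inputs.
    import numpy as np

    def hopf_cpg(mu: float, omega: float, dt: float = 0.01, T: float = 10.0):
        """Integrate a Hopf oscillator; sqrt(mu) is the limit-cycle amplitude."""
        n = int(T / dt)
        x, y = 0.1, 0.0
        traj = np.empty(n)
        for i in range(n):
            r2 = x * x + y * y
            dx = (mu - r2) * x - omega * y
            dy = (mu - r2) * y + omega * x
            x, y = x + dt * dx, y + dt * dy
            traj[i] = x                       # x drives, e.g., a knee joint angle
        return traj

    # A (mock) intention signal from the BCI/EMG interface scales the gait.
    intention = 0.6                           # 0 = passive, 1 = fully engaged
    trajectory = hopf_cpg(mu=(0.5 + 0.5 * intention) ** 2, omega=2 * np.pi * 0.8)
    print("peak joint command: %.2f rad" % trajectory[500:].max())
    ```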

  4. Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration

    PubMed Central

    Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis

    2009-01-01

    Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657

  5. Engaging in Critical Social Dialogue with Socially Diverse Undergraduate Teacher Candidates at a California State University

    ERIC Educational Resources Information Center

    Chavez-Reyes, Christina

    2012-01-01

    "Critical social dialogue" (CSD) is the process of problem posing, facilitating personal stories through silence and multimodal assignments, and positioning them for students to re-examine and re-evaluate their understanding of systems of social difference, the beginnings of a multicultural and social justice intellectual frame for…

  6. Transforming information for computer-aided instruction: using a Socratic Dialogue method to teach gross anatomy.

    PubMed Central

    Constantinou, P.; Daane, S.; Dev, P.

    1994-01-01

    Traditional teaching of anatomy can be a difficult process of rote memorization. Computers allow information presentation to be much more dynamic and interactive; the same information can be presented in multiple organizations. Using this idea, we have implemented a new pedagogy for computer-assisted instruction in The Anatomy Lesson, an interactive digital teacher which uses a "Socratic Dialogue" metaphor, as well as a textbook-like approach, to facilitate conceptual learning in anatomy. PMID:7949881

  7. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

    PubMed

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
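
    One ingredient of the pipeline, hierarchical clustering of self-position estimates into coarse places and finer sub-locations, can be sketched as below; the two-level k-means on synthetic 2-D positions is illustrative only and omits the hMLDA fusion of vision and word information.

    ```python
    # Hedged sketch: hierarchical k-means over 2-D self-position estimates,
    # giving coarse "places" and finer sub-locations within each place.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    rooms = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0]])        # synthetic room centres
    positions = np.vstack([c + 0.6 * rng.normal(size=(200, 2)) for c in rooms])

    coarse = KMeans(n_clusters=3, n_init=10, random_state=0).fit(positions)
    for place_id in range(3):
        pts = positions[coarse.labels_ == place_id]
        fine = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
        print(f"place {place_id}: {len(pts)} samples, "
              f"sub-location centres {np.round(fine.cluster_centers_, 1).tolist()}")
    ```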

  8. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

    PubMed Central

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept. PMID:29593521

  9. Multimodal human communication--targeting facial expressions, speech content and prosody.

    PubMed

    Regenbogen, Christina; Schneider, Daniel A; Gur, Raquel E; Schneider, Frank; Habel, Ute; Kellermann, Thilo

    2012-05-01

    Human communication is based on a dynamic information exchange of the communication channels facial expressions, prosody, and speech content. This fMRI study elucidated the impact of multimodal emotion processing and the specific contribution of each channel on behavioral empathy and its prerequisites. Ninety-six video clips displaying actors who told self-related stories were presented to 27 healthy participants. In two conditions, all channels uniformly transported only emotional or neutral information. Three conditions selectively presented two emotional channels and one neutral channel. Subjects indicated the actors' emotional valence and their own while fMRI was recorded. Activation patterns of tri-channel emotional communication reflected multimodal processing and facilitative effects for empathy. Accordingly, subjects' behavioral empathy rates significantly deteriorated once one source was neutral. However, emotionality expressed via two of three channels yielded activation in a network associated with theory-of-mind-processes. This suggested participants' effort to infer mental states of their counterparts and was accompanied by a decline of behavioral empathy, driven by the participants' emotional responses. Channel-specific emotional contributions were present in modality-specific areas. The identification of different network-nodes associated with human interactions constitutes a prerequisite for understanding dynamics that underlie multimodal integration and explain the observed decline in empathy rates. This task might also shed light on behavioral deficits and neural changes that accompany psychiatric diseases. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine
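
    Converting a sensed location between frames in such a spatial hierarchy can be illustrated with homogeneous transforms; the head-to-body offset and rotation below are invented for illustration.

    ```python
    # Hedged sketch: convert a sensed location from a head-centred frame to a
    # body-centred frame using a homogeneous transform.
    import numpy as np

    def make_transform(yaw_rad: float, translation_xyz) -> np.ndarray:
        """Homogeneous transform: rotation about z (yaw) plus a translation."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
        T[:3, 3] = translation_xyz
        return T

    # Assume the head frame sits 0.3 m above the body origin, turned 30 deg left.
    body_from_head = make_transform(np.deg2rad(30), [0.0, 0.0, 0.3])

    # An auditory source localized 1 m straight ahead in head-centred coordinates.
    p_head = np.array([1.0, 0.0, 0.0, 1.0])
    p_body = body_from_head @ p_head
    print("body-centred location:", np.round(p_body[:3], 3))
    ```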

  11. Automatic Dialogue Scoring for a Second Language Learning System

    ERIC Educational Resources Information Center

    Huang, Jin-Xia; Lee, Kyung-Soon; Kwon, Oh-Woog; Kim, Young-Kil

    2016-01-01

    This paper presents an automatic dialogue scoring approach for a Dialogue-Based Computer-Assisted Language Learning (DB-CALL) system, which helps users learn language via interactive conversations. The system produces overall feedback according to dialogue scoring to help the learner know which parts should be more focused on. The scoring measures…

  12. Computer-Based Multimodal Composing Activities, Self-Revision, and L2 Acquisition through Writing

    ERIC Educational Resources Information Center

    Dzekoe, Richmond

    2017-01-01

    This study investigated how 22 advanced-low proficiency ESL students used computer-based multimodal composing activities (CBMCAs) to facilitate self-revision and learn English through academic writing in the USA. The CBMCAs involved a combination of writing, listening, visual analysis, and speaking activities. The research was framed within an…

  13. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    NASA Astrophysics Data System (ADS)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  14. The Model of the "Space of Music Dialogue": Three Instances of Practice in Australian Homes and Classrooms

    ERIC Educational Resources Information Center

    Tomlinson, Michelle M.

    2018-01-01

    Multimodal analysis of classroom music interactions, using the model of the "Space of Music Dialogue" in video analysis of students' music improvisation, was useful to inform teachers of students' collaborative achievements in music invention. Research has affirmed that students' cognitive thinking skills were promoted by improvisation.…

  15. Questioning Mechanisms During Tutoring, Conversation, and Human-Computer Interaction

    DTIC Science & Technology

    1992-10-14

    project on the grant, we are analyzing sequences of speech act categories in dialogues between children. The 90 dialogues occur in the context of free ... play, a puzzle task, versus a 20-questions game. Our goal is to assess the extent to which various computational models can predict speech act category N

  16. Three dialogues concerning robots in elder care.

    PubMed

    Metzler, Theodore A; Barnes, Susan J

    2014-01-01

    The three dialogues in this contribution concern 21st century application of life-like robots in the care of older adults. They depict conversations set in the near future, involving a philosopher (Dr Phonius) and a nurse (Dr Myloss) who manages care at a large facility for assisted living. In their first dialogue, the speakers discover that their quite different attitudes towards human-robot interaction parallel fundamental differences separating their respective concepts of consciousness. The second dialogue similarly uncovers deeply contrasting notions of personhood that appear to be associated with respective communities of nursing and robotics. The additional key awareness that arises in their final dialogue links applications of life-like robots in the care of older adults with potential transformations in our understandings of ourselves - indeed, in our understandings of the nature of our own humanity. This series of dialogues, therefore, appears to address a topic in nursing philosophy that merits our careful attention. © 2013 John Wiley & Sons Ltd.

  17. Multi-mode horn antenna simulation

    NASA Technical Reports Server (NTRS)

    Dod, L. R.; Wolf, J. D.

    1980-01-01

    Radiation patterns were computed for a circular multimode horn antenna using waveguide electric field radiation expressions. The circular multimode horn was considered as a possible reflector feed antenna for the Large Antenna Multifrequency Microwave Radiometer (LAMMR). This horn antenna uses a summation of the TE11 and TM11 modes to generate far-field primary radiation patterns with equal E- and H-plane beamwidths and low sidelobes. A computer program for the radiation field expressions using the summation of waveguide radiation modes is described. The sensitivity of the multimode horn antenna radiation patterns to phase variations between the two modes is given. Sample radiation pattern calculations for a reflector feed horn for LAMMR are shown. The multimode horn antenna provides a low-noise feed suitable for radiometric applications.

  18. Multimodal approaches for emotion recognition: a survey

    NASA Astrophysics Data System (ADS)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2004-12-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing-emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.
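
    The simplest probabilistic fusion consistent with the survey's recommendation treats the per-modality classifier outputs as conditionally independent given the emotion label, which reduces to a naive Bayes product rule. The sketch below assumes hypothetical posteriors from face, voice and physiology classifiers; the survey's graphical models are considerably richer.

    ```python
    import numpy as np

    # Minimal probabilistic late fusion for multimodal emotion recognition, assuming
    # per-modality classifiers already output posteriors P(emotion | modality).
    # Treating modalities as conditionally independent given the label reduces the
    # fusion to a naive-Bayes product rule; this is only the simplest such model.

    EMOTIONS = ["neutral", "happy", "angry", "sad"]   # hypothetical label set

    def fuse_posteriors(posteriors, prior=None):
        """posteriors: one array per modality, each summing to 1 over EMOTIONS."""
        prior = np.ones(len(EMOTIONS)) / len(EMOTIONS) if prior is None else np.asarray(prior)
        log_score = np.log(prior)
        for p in posteriors:
            # divide out the prior once per modality so it is not counted repeatedly
            log_score += np.log(np.asarray(p) + 1e-12) - np.log(prior)
        fused = np.exp(log_score - log_score.max())
        return fused / fused.sum()

    # Example: face says "happy", voice is uncertain, physiology also leans "happy".
    face = [0.10, 0.70, 0.10, 0.10]
    voice = [0.30, 0.35, 0.20, 0.15]
    physio = [0.20, 0.50, 0.15, 0.15]
    print(dict(zip(EMOTIONS, fuse_posteriors([face, voice, physio]).round(3))))
    ```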

  19. Multimodal approaches for emotion recognition: a survey

    NASA Astrophysics Data System (ADS)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2005-01-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing-emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.

  20. Representation, Classification and Information Fusion for Robust and Efficient Multimodal Human States Recognition

    ERIC Educational Resources Information Center

    Li, Ming

    2013-01-01

    The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered as a joint term for identifying/verifying various kinds of human-related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…

  1. A Case Study of Diverse Multimodal Influences on Music Improvisation Using Visual Methodology

    ERIC Educational Resources Information Center

    Tomlinson, Michelle M.

    2016-01-01

    This case study employed multimodal methods and visual analysis to explore how a young multilingual student used music improvisation to form a speech rap. This student, recently arrived in Australia from Ethiopia, created piano music that was central to his music identity and that simultaneously, through dialogue with his mother, enhanced his…

  2. Using Tablet Computers in Preschool: How Does the Design of Applications Influence Participation, Interaction and Dialogues?

    ERIC Educational Resources Information Center

    Palmér, Hanna

    2015-01-01

    The results in this article explore whether and how the design of applications used on tablet computers influences the interaction and dialogues that occur between children and pedagogues, the participation of children in the activities and the mathematics that can be learned. While mathematics offered a lens to explore the use of tablet devices,…

  3. Multimodal imaging of the human knee down to the cellular level

    NASA Astrophysics Data System (ADS)

    Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.

    2017-06-01

    Computed tomography reaches the best spatial resolution for the three-dimensional visualization of human tissues among the available nondestructive clinical imaging techniques. Nowadays, sub-millimeter voxel sizes are regularly obtained. Regarding investigations on the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aims of the present study are firstly the hierarchical investigation of a human knee post mortem using hard X-ray μCT and secondly multimodal imaging using absorption and phase contrast modes in order to investigate hard (bone) and soft (cartilage) tissues on the cellular level. After the visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab-system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution with a pixel length of 3 μm could be achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm, it was possible to visualize individual chondrocytes within the cartilage.

  4. Multimodal Learning Clubs

    ERIC Educational Resources Information Center

    Casey, Heather

    2012-01-01

    Multimodal learning clubs link principles of motivation and engagement with 21st century technological tools and texts to support content area learning. The author describes how a sixth grade health teacher and his class incorporated multimodal learning clubs into a unit of study on human body systems. The students worked collaboratively online…

  5. Multimode and single-mode fibers for data center and high-performance computing applications

    NASA Astrophysics Data System (ADS)

    Bickham, Scott R.

    2016-03-01

    Data center (DC) and high performance computing (HPC) applications have traditionally used a combination of copper, multimode fiber and single-mode fiber interconnects with relative percentages that depend on factors such as the line rate, reach and connectivity costs. The balance between these transmission media has increasingly shifted towards optical fiber due to the reach constraints of copper at data rates of 10 Gb/s and higher. The percentage of single-mode fiber deployed in the DC has also grown slightly since 2014, coinciding with the emergence of mega DCs with extended distance needs beyond 100 m. This trend will likely continue in the next few years as DCs expand their capacity from 100G to 400G, increase the physical size of their facilities and begin to utilize silicon-photonics transceiver technology. However, there is still a need for low-cost and high-density connectivity, and this is sustaining the deployment of multimode fiber for links <= 100 m. In this paper, we discuss options for single-mode and multimode fibers in DCs and HPCs and introduce a reduced diameter multimode fiber concept which provides intra- and inter-rack connectivity as well as compatibility with silicon-photonic transceivers operating at 1310 nm. We also discuss the trade-offs between single-mode fiber attributes such as bend-insensitivity, attenuation and mode field diameter and their roles in capacity and connectivity in data centers.
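
    As a rough illustration of why reach drives the fiber choice, a dispersion-limited multimode link length can be approximated by dividing the fiber's effective modal bandwidth-distance product by the bandwidth the signal requires. The figures below are illustrative assumptions, not values from the paper.

    ```python
    # Back-of-the-envelope, dispersion-limited reach estimate for a multimode-fiber
    # link. All numbers are illustrative assumptions, not values taken from the paper.

    def approx_reach_m(emb_mhz_km, signal_bandwidth_ghz):
        """Approximate reach in metres from an effective modal bandwidth (EMB) figure."""
        emb_ghz_km = emb_mhz_km / 1000.0
        return emb_ghz_km / signal_bandwidth_ghz * 1000.0   # km converted to m

    # e.g. an OM4-class fibre (assumed ~4700 MHz.km at 850 nm) and a ~25 GHz-class signal
    print(f"{approx_reach_m(4700, 25):.0f} m")   # lands in the short-link (~100-200 m) regime
    ```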

  6. Intercultural Dialogue: Cultural Dialogues of Equals or Cultural Dialogues of Unequals?

    ERIC Educational Resources Information Center

    Igbino, John

    2011-01-01

    This article has two aims. The first aim of the article is to show some emerging problems and questions facing intercultural dialogue. This involves a critique of intercultural dialogue by situating it within emerging models of cultural change. The second aim of the article is to show alternative approaches to cultural dialogues. This involves the…

  7. Multimodal interaction for human-robot teams

    NASA Astrophysics Data System (ADS)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
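
    The modality arbitration behaviour described above can be pictured as a small context-to-modality ranking rule. The sketch below is a hypothetical illustration of that idea (fields, thresholds and rules are invented), not the authors' implementation.

    ```python
    from dataclasses import dataclass

    # Toy sketch of context-driven modality arbitration for a human-robot team, in
    # the spirit of the interaction system described above. The context fields and
    # selection rules here are hypothetical, not the authors' implementation.

    @dataclass
    class Context:
        stealth_required: bool
        line_of_sight: bool
        noise_level_db: float
        operator_hands_free: bool

    def preferred_modalities(ctx: Context):
        """Return interaction modes ordered from most to least suitable."""
        ranked = []
        if ctx.stealth_required and ctx.operator_hands_free:
            ranked.append("gesture")            # silent arm/hand commands
        if not ctx.stealth_required and ctx.noise_level_db < 80:
            ranked.append("voice")              # spoken commands when audible
        if not ctx.line_of_sight or not ranked:
            ranked.append("tablet_waypoints")   # map-based control, always available
        # keep the remaining modes as partially redundant fallbacks
        for mode in ("gesture", "voice", "tablet_waypoints"):
            if mode not in ranked:
                ranked.append(mode)
        return ranked

    print(preferred_modalities(Context(True, False, 65.0, True)))
    # ['gesture', 'tablet_waypoints', 'voice']
    ```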

  8. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    PubMed

    Cohn, Neil

    2016-01-01

    Human communication is naturally multimodal, and substantial research has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also pose challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality. Copyright © 2015.
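
    The framework's characterisation of a multimodal expression (which modalities are present, whether each carries a grammatical structure, and which dominates the semantics) maps naturally onto a small data structure. The sketch below uses my own field names as shorthand for those three dimensions.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    # Minimal data-structure sketch of the characterisation above: which modalities
    # are present, whether each uses a grammatical structure, and which one dominates
    # the semantics. Field names are my own shorthand, not terminology from the paper.

    MODALITIES = ("verbal", "bodily", "visual")

    @dataclass
    class ModalityState:
        present: bool
        grammar: Optional[str] = None   # e.g. "syntax" (verbal) or "narrative" (visual)

    @dataclass
    class MultimodalExpression:
        verbal: ModalityState
        bodily: ModalityState
        visual: ModalityState
        dominant: Optional[str] = None  # which modality drives the overall meaning

        def kind(self) -> str:
            active = [m for m in MODALITIES if getattr(self, m).present]
            return "+".join(active) + (f" (dominant: {self.dominant})" if self.dominant else "")

    # A captioned comic sequence: images carry narrative grammar and dominate the
    # meaning, while words are present but subordinate.
    comic = MultimodalExpression(
        verbal=ModalityState(True, "syntax"),
        bodily=ModalityState(False),
        visual=ModalityState(True, "narrative"),
        dominant="visual",
    )
    print(comic.kind())   # verbal+visual (dominant: visual)
    ```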

  9. Multimodal Learning Analytics and Education Data Mining: Using Computational Technologies to Measure Complex Learning Tasks

    ERIC Educational Resources Information Center

    Blikstein, Paulo; Worsley, Marcelo

    2016-01-01

    New high-frequency multimodal data collection technologies and machine learning analysis techniques could offer new insights into learning, especially when students have the opportunity to generate unique, personalized artifacts, such as computer programs, robots, and solutions engineering challenges. To date most of the work on learning analytics…

  10. Multimodal system for the planning and guidance of bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  11. Eye Tracking Based Control System for Natural Human-Computer Interaction

    PubMed Central

    Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of the eye tracking technique in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by only using the user's eye. The usage flow of the proposed system is designed to perfectly follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (article searching and multimedia web browsing) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design. PMID:29403528

  12. Eye Tracking Based Control System for Natural Human-Computer Interaction.

    PubMed

    Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of the eye tracking technique in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by only using the user's eye. The usage flow of the proposed system is designed to perfectly follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (article searching and multimedia web browsing) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.
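
    Gaze-only interfaces of this kind typically need some mechanism for turning fixations into selections; dwell-time triggering is one common building block. The sketch below shows that generic idea with assumed radius and dwell parameters, not the specific algorithm of the system described in these records.

    ```python
    import math

    # Sketch of dwell-time selection, a common building block in gaze-only interfaces
    # of the kind described above (not the authors' specific algorithm). A "click" is
    # issued when the gaze stays within a small radius for a minimum dwell duration.

    def detect_dwell_clicks(samples, radius_px=40.0, dwell_s=0.8):
        """samples: list of (t_seconds, x_px, y_px) gaze points, time-ordered."""
        clicks, anchor, start = [], None, None
        for t, x, y in samples:
            if anchor is None or math.hypot(x - anchor[0], y - anchor[1]) > radius_px:
                anchor, start = (x, y), t          # gaze moved: restart the dwell timer
                continue
            if t - start >= dwell_s:
                clicks.append((t, anchor))         # dwell long enough: emit a click
                anchor, start = None, None         # require movement before re-arming
        return clicks

    gaze = [(i * 0.1, 500 + (i % 3), 300) for i in range(12)]   # ~1.1 s of steady fixation
    print(detect_dwell_clicks(gaze))   # one click near (500, 300) after ~0.8 s
    ```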

  13. Human perceptual deficits as factors in computer interface test and evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowser, S.E.

    1992-06-01

    Issues related to testing and evaluating human-computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits tend to be found in higher-than-overall population distribution in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Secondly, interface designs should use multimode information coding.

  14. Human cortical–hippocampal dialogue in wake and slow-wave sleep

    PubMed Central

    Mitra, Anish; Hacker, Carl D.; Pahwa, Mrinal; Tagliazucchi, Enzo; Laufs, Helmut; Leuthardt, Eric C.; Raichle, Marcus E.

    2016-01-01

    Declarative memory consolidation is hypothesized to require a two-stage, reciprocal cortical–hippocampal dialogue. According to this model, higher frequency signals convey information from the cortex to hippocampus during wakefulness, but in the reverse direction during slow-wave sleep (SWS). Conversely, lower-frequency activity propagates from the information “receiver” to the “sender” to coordinate the timing of information transfer. Reversal of sender/receiver roles across wake and SWS implies that higher- and lower-frequency signaling should reverse direction between the cortex and hippocampus. However, direct evidence of such a reversal has been lacking in humans. Here, we use human resting-state fMRI and electrocorticography to demonstrate that δ-band activity and infraslow activity propagate in opposite directions between the hippocampus and cerebral cortex. Moreover, both δ activity and infraslow activity reverse propagation directions between the hippocampus and cerebral cortex across wake and SWS. These findings provide direct evidence for state-dependent reversals in human cortical–hippocampal communication. PMID:27791089

  15. Dialogue Systems and Dialogue Management

    DTIC Science & Technology

    2016-12-01

    dialogue management capability within DST Group's Consensus project. … This research into dialogue management is part of a joint collaboration between DST Group and CSIRO. The project team comprised…

  16. Socratic Dialogue, the Humanities and the Art of the Question

    ERIC Educational Resources Information Center

    Mitchell, Sebastian

    2006-01-01

    Plato's depiction of Socrates' interrogations in his early dialogues provides an enduring example of the importance of asking questions as an educative method. This article considers the central educational elements of Socratic dialogue and the ways in which these were developed in the 20th century, particularly in "The Socratic Method"…

  17. Instruction dialogues: Teaching new skills to a robot

    NASA Technical Reports Server (NTRS)

    Crangle, Colleen; Suppes, P.

    1989-01-01

    Extended dialogues between a human user and a robot system are presented. The purpose of each dialogue is to teach the robot a new skill or to improve the performance of a skill it already has. The particular interest is in natural language dialogues but the illustrated techniques can be applied to any high level language. The primary purpose is to show how verbal instruction can be integrated with the robot's autonomous learning of a skill.

  18. Multimodal computational microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2016-12-01

    Transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging, which requires intensity measurements only at axially closely spaced planes without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE gives valuable information that has been encoded in the complex wave field by passage through a sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally without requiring specialized hardware components. We develop a requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light field moment imaging, simultaneously. It makes the various observations for biomedical samples easy. Then we give the experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens-based TIE system, combined with the appropriate postprocessing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
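
    Under the common simplifying assumption of a nearly uniform in-focus intensity, the TIE reduces to a Poisson equation for the phase that can be inverted with FFTs. The sketch below shows that basic phase-retrieval step only; the tunable-lens acquisition and the multimodal post-processing described in the paper go well beyond it.

    ```python
    import numpy as np

    # Minimal FFT-based TIE phase retrieval, assuming a roughly uniform in-focus
    # intensity I0 so that the TIE reduces to: laplacian(phi) = -(k / I0) * dI/dz.
    # This is only the basic inversion step, not the paper's full multimodal pipeline.

    def tie_phase_uniform(I_plus, I_minus, dz, wavelength, pixel_size, I0=None, reg=1e-3):
        k = 2 * np.pi / wavelength
        dIdz = (I_plus - I_minus) / (2 * dz)              # axial intensity derivative
        if I0 is None:
            I0 = 0.5 * (I_plus + I_minus).mean()
        ny, nx = dIdz.shape
        fx = np.fft.fftfreq(nx, d=pixel_size)
        fy = np.fft.fftfreq(ny, d=pixel_size)
        FX, FY = np.meshgrid(fx, fy)
        lap = -4 * np.pi ** 2 * (FX ** 2 + FY ** 2)       # Fourier symbol of the Laplacian
        rhs = -(k / I0) * dIdz
        phi_hat = np.fft.fft2(rhs) / (lap - reg)          # regularised inverse Laplacian
        phi_hat[0, 0] = 0.0                               # phase recovered up to a constant
        return np.real(np.fft.ifft2(phi_hat))

    # Synthetic check with made-up numbers: two slightly defocused intensity images.
    rng = np.random.default_rng(0)
    I_minus = 1.0 + 0.01 * rng.standard_normal((256, 256))
    I_plus = 1.0 + 0.01 * rng.standard_normal((256, 256))
    phi = tie_phase_uniform(I_plus, I_minus, dz=2e-6, wavelength=550e-9, pixel_size=3.45e-6)
    print(phi.shape, float(phi.mean()))
    ```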

  19. Gestural and symbolic development among apes and humans: support for a multimodal theory of language evolution.

    PubMed

    Gillespie-Lynch, Kristen; Greenfield, Patricia M; Lyn, Heidi; Savage-Rumbaugh, Sue

    2014-01-01

    What are the implications of similarities and differences in the gestural and symbolic development of apes and humans? This focused review uses as a starting point our recent study that provided evidence that gesture supported the symbolic development of a chimpanzee, a bonobo, and a human child reared in language-enriched environments at comparable stages of communicative development. These three species constitute a complete clade, species possessing a common immediate ancestor. Communicative behaviors observed among all species in a clade are likely to have been present in the common ancestor. Similarities in the form and function of many gestures produced by the chimpanzee, bonobo, and human child suggest that shared non-verbal skills may underlie shared symbolic capacities. Indeed, an ontogenetic sequence from gesture to symbol was present across the clade but more pronounced in child than ape. Multimodal expressions of communicative intent (e.g., vocalization plus persistence or eye-contact) were normative for the child, but less common for the apes. These findings suggest that increasing multimodal expression of communicative intent may have supported the emergence of language among the ancestors of humans. Therefore, this focused review includes new studies, since our 2013 article, that support a multimodal theory of language evolution.

  20. Gestural and symbolic development among apes and humans: support for a multimodal theory of language evolution

    PubMed Central

    Gillespie-Lynch, Kristen; Greenfield, Patricia M.; Lyn, Heidi; Savage-Rumbaugh, Sue

    2014-01-01

    What are the implications of similarities and differences in the gestural and symbolic development of apes and humans? This focused review uses as a starting point our recent study that provided evidence that gesture supported the symbolic development of a chimpanzee, a bonobo, and a human child reared in language-enriched environments at comparable stages of communicative development. These three species constitute a complete clade, species possessing a common immediate ancestor. Communicative behaviors observed among all species in a clade are likely to have been present in the common ancestor. Similarities in the form and function of many gestures produced by the chimpanzee, bonobo, and human child suggest that shared non-verbal skills may underlie shared symbolic capacities. Indeed, an ontogenetic sequence from gesture to symbol was present across the clade but more pronounced in child than ape. Multimodal expressions of communicative intent (e.g., vocalization plus persistence or eye-contact) were normative for the child, but less common for the apes. These findings suggest that increasing multimodal expression of communicative intent may have supported the emergence of language among the ancestors of humans. Therefore, this focused review includes new studies, since our 2013 article, that support a multimodal theory of language evolution. PMID:25400607

  1. A multimodal image guiding system for Navigated Ultrasound Bronchoscopy (EBUS): A human feasibility study

    PubMed Central

    Hofstad, Erlend Fagertun; Amundsen, Tore; Langø, Thomas; Bakeng, Janne Beate Lervik; Leira, Håkon Olav

    2017-01-01

    Background Endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) is the endoscopic method of choice for confirming lung cancer metastasis to mediastinal lymph nodes. Precision is crucial for correct staging and clinical decision-making. Navigation and multimodal imaging can potentially improve EBUS-TBNA efficiency. Aims To demonstrate the feasibility of a multimodal image guiding system using electromagnetic navigation for ultrasound bronchoscopy in humans. Methods Four patients referred for lung cancer diagnosis and staging with EBUS-TBNA were enrolled in the study. Target lymph nodes were predefined from the preoperative computed tomography (CT) images. A prototype convex probe ultrasound bronchoscope with an attached sensor for position tracking was used for EBUS-TBNA. Electromagnetic tracking of the ultrasound bronchoscope and ultrasound images allowed fusion of preoperative CT and intraoperative ultrasound in the navigation software. Navigated EBUS-TBNA was used to guide target lymph node localization and sampling. Navigation system accuracy was calculated, measured by the deviation between lymph node position in ultrasound and CT in three planes. Procedure time, diagnostic yield and adverse events were recorded. Results Preoperative CT and real-time ultrasound images were successfully fused and displayed in the navigation software during the procedures. Overall navigation accuracy (11 measurements) was 10.0 ± 3.8 mm, maximum 17.6 mm, minimum 4.5 mm. An adequate sample was obtained in 6/6 (100%) of targeted lymph nodes. No adverse events were registered. Conclusions Electromagnetic navigated EBUS-TBNA was feasible, safe and easy in this human pilot study. The clinical usefulness was clearly demonstrated. Fusion of real-time ultrasound, preoperative CT and electromagnetic navigational bronchoscopy provided controlled guidance to the level of the target, an intraoperative overview and procedure documentation. PMID:28182758
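
    The accuracy figure reported above is a mean and standard deviation of per-node deviations between ultrasound and CT positions. One plausible way to tally such a figure, using the 3D Euclidean deviation and made-up coordinates, is sketched below.

    ```python
    import numpy as np

    # How a navigation-accuracy figure like "10.0 +/- 3.8 mm" can be tallied: the
    # deviation between each lymph node's position in intraoperative ultrasound and
    # in preoperative CT, summarised as mean +/- SD. Here the deviation is taken as a
    # 3D Euclidean distance, and the coordinates below are made up.

    us_mm = np.array([[12.0, 40.5, 7.2], [30.1, 22.4, 15.8], [5.6, 18.0, 33.3]])
    ct_mm = np.array([[18.5, 44.0, 10.1], [24.9, 25.0, 22.0], [9.8, 13.2, 30.0]])

    deviations = np.linalg.norm(us_mm - ct_mm, axis=1)     # per-node deviation in mm
    print(deviations.round(1),
          f"mean {deviations.mean():.1f} +/- {deviations.std(ddof=1):.1f} mm")
    ```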

  2. A Chatbot for a Dialogue-Based Second Language Learning System

    ERIC Educational Resources Information Center

    Huang, Jin-Xia; Lee, Kyung-Soon; Kwon, Oh-Woog; Kim, Young-Kil

    2017-01-01

    This paper presents a chatbot for a Dialogue-Based Computer-Assisted second Language Learning (DB-CALL) system. A DB-CALL system normally leads dialogues by asking questions according to given scenarios. User utterances outside the scenarios are normally considered as semantically improper and simply rejected. In this paper, we assume that raising…

  3. An Examination of Collaborative Learning Assessment through Dialogue (CLAD) in Traditional and Hybrid Human Development Courses

    ERIC Educational Resources Information Center

    McCarthy, Wanda C.; Green, Peter J.; Fitch, Trey

    2010-01-01

    This investigation assessed the effectiveness of using Collaborative Learning Assessment through Dialogue (CLAD) (Fitch & Hulgin, 2007) with students in undergraduate human development courses. The key parts of CLAD are student collaboration, active learning, and altering the role of the instructor to a guide who enhances learning opportunities.…

  4. Automatic processing of spoken dialogue in the home hemodialysis domain.

    PubMed

    Lacson, Ronilda; Barzilay, Regina

    2005-01-01

    Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). This work demonstrates the feasibility of automatically processing spoken medical dialogue.
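
    The abstraction step described above, mapping each turn of the conversation to a semantic category with supervised learning, can be sketched with a generic text-classification pipeline. The TF-IDF features and toy labels below merely stand in for the paper's richer linguistic and contextual features.

    ```python
    # Sketch of supervised semantic-category tagging of dialogue turns, in the spirit
    # of the approach above. A generic TF-IDF + logistic-regression pipeline stands in
    # for the paper's linguistic/contextual features; the toy labels are illustrative.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    turns = [
        "my blood pressure was one forty over ninety this morning",
        "did you remember to take the heparin before the exchange",
        "the machine alarmed twice during the night",
        "try lowering the dwell volume and call me if it alarms again",
    ]
    labels = ["clinical_data", "medication", "equipment", "plan"]   # toy category set

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(turns, labels)
    print(model.predict(["the cycler alarmed again around midnight"]))
    ```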

  5. Human/Computer Transaction Tasks: An Annotated Bibliography.

    DTIC Science & Technology

    1982-05-01

    Operations (Manpower, Personnel, and Training, OP-01). The subproject was directed toward resolving fundamental human engineering design issues in...1978 and results were used at the Navy Personnel Research and Development Center in research to resolve fundamental human engineering design issues for...Dialogue Monitor and analysis of the data obtained are briefly discussed. Alden, D. G., Daniels, P. 3., and Kanarick, A. F. Keyboard design and

  6. Annotation of Tutorial Dialogue Goals for Natural Language Generation

    ERIC Educational Resources Information Center

    Kim, Jung Hee; Freedman, Reva; Glass, Michael; Evens, Martha W.

    2006-01-01

    We annotated transcripts of human tutoring dialogue for the purpose of constructing a dialogue-based intelligent tutoring system, CIRCSIM-Tutor. The tutors were professors of physiology who were also expert tutors. The students were 1st year medical students who communicated with the tutors using typed communication from separate rooms. The tutors…

  7. Exploring the requirements for multimodal interaction for mobile devices in an end-to-end journey context.

    PubMed

    Krehl, Claudia; Sharples, Sarah

    2012-01-01

    The paper investigates the requirements for multimodal interaction on mobile devices in an end-to-end journey context. Traditional interfaces are deemed cumbersome and inefficient for exchanging information with the user. Multimodal interaction provides a different user-centred approach allowing for more natural and intuitive interaction between humans and computers. It is especially suitable for mobile interaction as it can overcome additional constraints including small screens, awkward keypads, and continuously changing settings - an inherent property of mobility. This paper is based on end-to-end journeys where users encounter several contexts during their journeys. Interviews and focus groups explore the requirements for multimodal interaction design for mobile devices by examining journey stages and identifying the users' information needs and sources. Findings suggest that multimodal communication is crucial when users multitask. Choosing suitable modalities depends on user context, characteristics and tasks.

  8. Construction of a multimodal CT-video chest model

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2014-03-01

    Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.

  9. Building dialogue on complex conservation issues in a conference setting.

    PubMed

    Rock, Jenny; Sparrow, Andrew; Wass, Rob; Moller, Henrik

    2014-10-01

    Dialogue about complex science and society issues is important for contemporary conservation agendas. Conferences provide an appropriate space for such dialogue, but despite its recognized worth, best practices for facilitating active dialogue are still being explored. Face-to-face (FTF) and computer-mediated communication (CMC) are two approaches to facilitating dialogue that have different strengths. We assessed the use of these approaches to create dialogue on cultural perspectives of conservation and biodiversity at a national ecology conference. In particular, we aimed to evaluate their potential to enhance dialogue through their integrated application. We used an interactive blog to generate CMC on participant-sourced issues and to prime subsequent discussion in an FTF conference workshop. The quantity and quality of both CMC and FTF discussion indicated that both approaches were effective in building dialogue. Prior to the conference the blog averaged 126 views per day, and 44 different authors contributed a total of 127 comments. Twenty-five participants subsequently participated in active FTF discussion during a 3-h workshop. Postconference surveys confirmed that CMC had developed participants' thinking and deepened FTF dialogue; 88% indicated specifically that CMC helped facilitate the FTF discussion. A further 83% of respondents concluded that preliminary blog discussion would be useful for facilitating dialogue at future conferences. © 2014 Society for Conservation Biology.

  10. Human difference in the genomic era: Facilitating a socially responsible dialogue

    PubMed Central

    2010-01-01

    Background The study of human genetic variation has been advanced by research such as genome-wide association studies, which aim to identify variants associated with common, complex diseases and traits. Significant strides have already been made in gleaning information on susceptibility, treatment, and prevention of a number of disorders. However, as genetic researchers continue to uncover underlying differences between individuals, there is growing concern that observed population-level differences will be inappropriately generalized as inherent to particular racial or ethnic groups and potentially perpetuate negative stereotypes. Discussion We caution that imprecision of language when conveying research conclusions, compounded by the potential distortion of findings by the media, can lead to the stigmatization of racial and ethnic groups. Summary It is essential that the scientific community, together with those reporting and disseminating research findings, continue to foster a socially responsible dialogue about genetic variation and human difference. PMID:20504336

  11. Investigating the Relationship between Dialogue Structure and Tutoring Effectiveness: A Hidden Markov Modeling Approach

    ERIC Educational Resources Information Center

    Boyer, Kristy Elizabeth; Phillips, Robert; Ingram, Amy; Ha, Eun Young; Wallis, Michael; Vouk, Mladen; Lester, James

    2011-01-01

    Identifying effective tutorial dialogue strategies is a key issue for intelligent tutoring systems research. Human-human tutoring offers a valuable model for identifying effective tutorial strategies, but extracting them is a challenge because of the richness of human dialogue. This article addresses that challenge through a machine learning…
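
    A minimal ingredient of the hidden Markov modeling approach named in the title is the forward algorithm, which scores a sequence of dialogue acts under a given model. The sketch below uses invented states, acts and probabilities purely for illustration.

    ```python
    import numpy as np

    # Minimal forward-algorithm sketch for a hidden Markov model over dialogue acts,
    # illustrating the kind of sequence model used to study tutorial dialogue
    # structure. States, acts and probabilities are invented for illustration.

    states = ["lecture", "scaffolding"]                # hidden dialogue "modes"
    acts = {"explain": 0, "hint": 1, "question": 2}    # observed dialogue acts

    start = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3],                      # P(next mode | current mode)
                      [0.4, 0.6]])
    emit = np.array([[0.7, 0.1, 0.2],                  # P(act | mode)
                     [0.2, 0.5, 0.3]])

    def sequence_likelihood(act_sequence):
        """Forward algorithm: P(observed act sequence) under the HMM above."""
        obs = [acts[a] for a in act_sequence]
        alpha = start * emit[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
        return alpha.sum()

    print(sequence_likelihood(["explain", "question", "hint", "hint"]))
    ```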

  12. Creative Multimodal Learning Environments and Blended Interaction for Problem-Based Activity in HCI Education

    ERIC Educational Resources Information Center

    Ioannou, Andri; Vasiliou, Christina; Zaphiris, Panayiotis; Arh, Tanja; Klobucar, Tomaž; Pipan, Matija

    2015-01-01

    This exploratory case study aims to examine how students benefit from a multimodal learning environment while they engage in collaborative problem-based activity in a Human Computer Interaction (HCI) university course. For 12 weeks, 30 students, in groups of 5-7 each, participated in weekly face-to-face meetings and online interactions.…

  13. Radioactive Nanomaterials for Multimodality Imaging

    PubMed Central

    Chen, Daiqin; Dougherty, Casey A.; Yang, Dongzhi; Wu, Hongwei; Hong, Hao

    2016-01-01

    Nuclear imaging techniques, including primarily positron emission tomography (PET) and single-photon emission computed tomography (SPECT), can provide quantitative information for a biological event in vivo with ultra-high sensitivity; however, the comparatively low spatial resolution is their major limitation in clinical application. By convergence of nuclear imaging with other imaging modalities like computed tomography (CT), magnetic resonance imaging (MRI) and optical imaging, the hybrid imaging platforms can overcome the limitations from each individual imaging technique. Possessing versatile chemical linking ability and good cargo-loading capacity, radioactive nanomaterials can serve as ideal imaging contrast agents. In this review, we provide a brief overview of current state-of-the-art applications of radioactive nanomaterials in the context of multimodality imaging. We present strategies for incorporation of radioisotope(s) into nanomaterials along with applications of radioactive nanomaterials in multimodal imaging. Advantages and limitations of radioactive nanomaterials for multimodal imaging applications are discussed. Finally, a future perspective of possible radioactive nanomaterial utilization is presented for improving diagnosis and patient management in a variety of diseases. PMID:27227167

  14. Using Virtual Technology to Promote Functional Communication in Aphasia: Preliminary Evidence From Interactive Dialogues With Human and Virtual Clinicians.

    PubMed

    Kalinyak-Fliszar, Michelene; Martin, Nadine; Keshner, Emily; Rudnicky, Alex; Shi, Justin; Teodoro, Gregory

    2015-11-01

    We investigated the feasibility of using a virtual clinician (VC) to promote functional communication abilities of persons with aphasia (PWAs). We aimed to determine whether the quantity and quality of verbal output in dialogues with a VC would be the same or greater than those with a human clinician (HC). Four PWAs practiced dialogues for 2 sessions each with a HC and VC. Dialogues from before and after practice were transcribed and analyzed for content. We compared measures taken before and after practice in the VC and HC conditions. Results were mixed. Participants either produced more verbal output with the VC or showed no difference on this measure between the VC and HC conditions. Participants also showed some improvement in postpractice narratives. Results provide support for the feasibility and applicability of virtual technology to real-life communication contexts to improve functional communication in PWAs.

  15. Linguistic Analysis of Natural Language Communication with Computers.

    ERIC Educational Resources Information Center

    Thompson, Bozena Henisz

    Interaction with computers in natural language requires a language that is flexible and suited to the task. This study of natural dialogue was undertaken to reveal those characteristics which can make computer English more natural. Experiments were made in three modes of communication: face-to-face, terminal-to-terminal, and human-to-computer,…

  16. Conversational Interfaces: A Domain-Independent Architecture for Task-Oriented Dialogues

    DTIC Science & Technology

    2002-12-12

    system ought to be able to facilitate the understanding of the intentions of the human operator and because it should be able to communicate the plans...instantiated from the recipes with natural language a straightforward task for the dialogue front-end to facilitate. Moreover, it is designed so that constraints...take advantage of the framework discussed in this paper in order to facilitate more natural dialogues between the human operator and the device. The

  17. Multimodal neural correlates of cognitive control in the Human Connectome Project.

    PubMed

    Lerman-Sinkoff, Dov B; Sui, Jing; Rachakonda, Srinivas; Kandala, Sridhar; Calhoun, Vince D; Barch, Deanna M

    2017-12-01

    Cognitive control is a construct that refers to the set of functions that enable decision-making and task performance through the representation of task states, goals, and rules. The neural correlates of cognitive control have been studied in humans using a wide variety of neuroimaging modalities, including structural MRI, resting-state fMRI, and task-based fMRI. The results from each of these modalities independently have implicated the involvement of a number of brain regions in cognitive control, including dorsal prefrontal cortex, and frontal parietal and cingulo-opercular brain networks. However, it is not clear how the results from a single modality relate to results in other modalities. Recent developments in multimodal image analysis methods provide an avenue for answering such questions and could yield more integrated models of the neural correlates of cognitive control. In this study, we used multiset canonical correlation analysis with joint independent component analysis (mCCA + jICA) to identify multimodal patterns of variation related to cognitive control. We used two independent cohorts of participants from the Human Connectome Project, each of which had data from four imaging modalities. We replicated the findings from the first cohort in the second cohort using both independent and predictive analyses. The independent analyses identified a component in each cohort that was highly similar to the other and significantly correlated with cognitive control performance. The replication by prediction analyses identified two independent components that were significantly correlated with cognitive control performance in the first cohort and significantly predictive of performance in the second cohort. These components identified positive relationships across the modalities in neural regions related to both dynamic and stable aspects of task control, including regions in both the frontal-parietal and cingulo-opercular networks, as well as regions
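
    As a greatly simplified stand-in for the multimodal fusion used here, plain two-view canonical correlation analysis illustrates the core idea of finding linked patterns of variation across two feature sets; the study's mCCA + jICA couples more than two modalities jointly. The data below are random placeholders.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    # Greatly simplified stand-in for the multimodal fusion idea above: plain two-view
    # canonical correlation analysis finds linked patterns of variation across two
    # feature sets (e.g. structural and task-fMRI features per subject). The study's
    # mCCA + jICA handles more than two modalities jointly; the data here are random.

    rng = np.random.default_rng(0)
    n_subjects = 100
    shared = rng.normal(size=(n_subjects, 1))             # latent shared factor
    X = shared @ rng.normal(size=(1, 20)) + 0.5 * rng.normal(size=(n_subjects, 20))
    Y = shared @ rng.normal(size=(1, 30)) + 0.5 * rng.normal(size=(n_subjects, 30))

    cca = CCA(n_components=1).fit(X, Y)
    u, v = cca.transform(X, Y)
    print("canonical correlation:", np.corrcoef(u[:, 0], v[:, 0])[0, 1].round(2))
    ```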

  18. Human Behavior Analysis by Means of Multimodal Context Mining

    PubMed Central

    Banos, Oresti; Villalonga, Claudia; Bang, Jaehun; Hur, Taeho; Kang, Donguk; Park, Sangbeom; Huynh-The, Thien; Le-Ba, Vui; Amin, Muhammad Bilal; Razzaq, Muhammad Asif; Khan, Wahajat Ali; Hong, Choong Seon; Lee, Sungyoung

    2016-01-01

    There is sufficient evidence proving the impact that negative lifestyle choices have on people’s health and wellness. Changing unhealthy behaviours requires raising people’s self-awareness and also providing healthcare experts with a thorough and continuous description of the user’s conduct. Several monitoring techniques have been proposed in the past to track users’ behaviour; however, these approaches are either subjective and prone to misreporting, such as questionnaires, or only focus on a specific component of context, such as activity counters. This work presents an innovative multimodal context mining framework to inspect and infer human behaviour in a more holistic fashion. The proposed approach extends beyond the state-of-the-art, since it not only explores a sole type of context, but also combines diverse levels of context in an integral manner. Namely, low-level contexts, including activities, emotions and locations, are identified from heterogeneous sensory data through machine learning techniques. Low-level contexts are combined using ontological mechanisms to derive a more abstract representation of the user’s context, here referred to as high-level context. An initial implementation of the proposed framework supporting real-time context identification is also presented. The developed system is evaluated for various realistic scenarios making use of a novel multimodal context open dataset and data on-the-go, demonstrating prominent context-aware capabilities at both low and high levels. PMID:27517928

  19. Human Behavior Analysis by Means of Multimodal Context Mining.

    PubMed

    Banos, Oresti; Villalonga, Claudia; Bang, Jaehun; Hur, Taeho; Kang, Donguk; Park, Sangbeom; Huynh-The, Thien; Le-Ba, Vui; Amin, Muhammad Bilal; Razzaq, Muhammad Asif; Khan, Wahajat Ali; Hong, Choong Seon; Lee, Sungyoung

    2016-08-10

    There is sufficient evidence proving the impact that negative lifestyle choices have on people's health and wellness. Changing unhealthy behaviours requires raising people's self-awareness and also providing healthcare experts with a thorough and continuous description of the user's conduct. Several monitoring techniques have been proposed in the past to track users' behaviour; however, these approaches are either subjective and prone to misreporting, such as questionnaires, or only focus on a specific component of context, such as activity counters. This work presents an innovative multimodal context mining framework to inspect and infer human behaviour in a more holistic fashion. The proposed approach extends beyond the state-of-the-art, since it not only explores a sole type of context, but also combines diverse levels of context in an integral manner. Namely, low-level contexts, including activities, emotions and locations, are identified from heterogeneous sensory data through machine learning techniques. Low-level contexts are combined using ontological mechanisms to derive a more abstract representation of the user's context, here referred to as high-level context. An initial implementation of the proposed framework supporting real-time context identification is also presented. The developed system is evaluated for various realistic scenarios making use of a novel multimodal context open dataset and data on-the-go, demonstrating prominent context-aware capabilities at both low and high levels.
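
    The combination step, deriving a high-level context from recognised low-level contexts, can be pictured as matching activity, emotion and location against a set of rules. The paper performs this with ontological reasoning; the toy rules below are invented to make the idea concrete.

    ```python
    # Toy sketch of deriving a high-level context from low-level contexts (activity,
    # emotion, location), mirroring the combination step described above. The paper
    # uses ontological reasoning; the rules below are invented examples.

    HIGH_LEVEL_RULES = [
        ({"activity": "running", "location": "park"}, "exercising_outdoors"),
        ({"activity": "sitting", "location": "office", "emotion": "stressed"}, "overworking"),
        ({"activity": "lying", "location": "home", "emotion": "calm"}, "resting"),
    ]

    def infer_high_level(low_level: dict) -> str:
        """Return the first high-level context whose conditions all match."""
        for conditions, label in HIGH_LEVEL_RULES:
            if all(low_level.get(k) == v for k, v in conditions.items()):
                return label
        return "unclassified"

    print(infer_high_level({"activity": "sitting", "emotion": "stressed", "location": "office"}))
    # overworking
    ```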

  20. Human Factors Considerations in System Design

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M. (Editor); Vanbalen, P. M. (Editor); Moe, K. L. (Editor)

    1983-01-01

    Human factors considerations in systems design were examined. Human factors in automated command and control, in the efficiency of the human-computer interface, and in system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; and information display and interaction in real-time environments.

  1. Influence of Pause Duration and Nod Response Timing in Dialogue between Human and Communication Robot

    NASA Astrophysics Data System (ADS)

    Takasugi, Shoji; Yoshida, Shohei; Okitsu, Kengo; Yokoyama, Masanori; Yamamoto, Tomohito; Miyake, Yoshihiro

    The purpose of this study is to clarify the influence of utterance timing and body motion in dialogue between a human and a robot. We controlled pause duration and nod response timing on the robot side, and analyzed the impression of communication on the human side using Scheffe's Paired Comparison method. The results revealed that the impression of communication was significantly modified by changing the pause duration and nod response timing. Moreover, the timing pattern of the impression varied more diversely in elderly people than in younger people, indicating that the elderly generation uses various timing control mechanisms. These results suggest that timing control and the impression of communication are mutually influenced, and that this mechanism could be useful for realizing human-robot communication systems for the elderly generation.

  2. Low-Loss Photonic Reservoir Computing with Multimode Photonic Integrated Circuits.

    PubMed

    Katumba, Andrew; Heyvaert, Jelle; Schneider, Bendix; Uvin, Sarah; Dambre, Joni; Bienstman, Peter

    2018-02-08

    We present a numerical study of a passive integrated photonics reservoir computing platform based on multimodal Y-junctions. We propose a novel design of this junction where the level of adiabaticity is carefully tailored to capture the radiation loss in higher-order modes, while at the same time providing additional mode mixing that increases the richness of the reservoir dynamics. With this design, we report an overall average combination efficiency of 61% compared to the standard 50% for the single-mode case. We demonstrate that with this design, much more power is able to reach the distant nodes of the reservoir, leading to increased scaling prospects. We use the example of a header recognition task to confirm that such a reservoir can be used for bit-level processing tasks. The design itself is CMOS-compatible and can be fabricated through the known standard fabrication procedures.
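
    In reservoir computing, photonic or otherwise, only a linear readout on the reservoir's state is trained. The sketch below is a generic software echo-state-network analogue of that training step (random reservoir, ridge-regression readout); it does not simulate the photonic Y-junction reservoir studied in the paper.

    ```python
    import numpy as np

    # Generic software echo-state-network analogue of reservoir computing: a fixed
    # random recurrent "reservoir" expands the input and only a linear readout is
    # trained by ridge regression. Not a simulation of the photonic reservoir itself.

    rng = np.random.default_rng(1)
    n_res, T = 100, 2000
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

    u = rng.uniform(-1, 1, size=T)                    # input signal
    y = np.roll(u, 3)                                 # task: recall the input 3 steps back

    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        x = np.tanh(W_in[:, 0] * u[t] + W @ x)        # reservoir state update
        states[t] = x

    A, target = states[100:], y[100:]                 # drop the initial transient
    W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ target)
    nmse = np.mean((A @ W_out - target) ** 2) / np.var(target)
    print(f"training NMSE: {nmse:.3f}")
    ```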

  3. Patient-tailored multimodal neuroimaging, visualization and quantification of human intra-cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.

    2016-03-01

    In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a 'connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.

  4. Applications of Elpasolites as a Multimode Radiation Sensor

    NASA Astrophysics Data System (ADS)

    Guckes, Amber

    This study consists of both computational and experimental investigations. The computational results enabled detector design selections and confirmed experimental results. The experimental results determined that the CLYC scintillation detector can be applied as a functional and field-deployable multimode radiation sensor. The computational study utilized MCNP6 code to investigate the response of CLYC to various incident radiations and to determine the feasibility of its application as a handheld multimode sensor and as a single-scintillator collimated directional detection system. These simulations include:
    • Characterization of the response of the CLYC scintillator to gamma-rays and neutrons;
    • Study of the isotopic enrichment of 7Li versus 6Li in the CLYC for optimal detection of both thermal neutrons and fast neutrons;
    • Analysis of collimator designs to determine the optimal collimator for the single CLYC sensor directional detection system to assay gamma rays and neutrons;
    • Simulations of a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system with the optimized collimator to determine the feasibility of detecting nuclear materials that could be encountered during field operations.
    These nuclear materials include depleted uranium, natural uranium, low-enriched uranium, highly-enriched uranium, reactor-grade plutonium, and weapons-grade plutonium. The experimental study includes the design, construction, and testing of both a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system. Both were designed in the Inventor CAD software and based on results of the computational study to optimize its performance. The handheld CLYC multimode sensor is modular, scalable, low-power, and optimized for high count rates. Commercial-off-the-shelf components were used where possible in order to optimize size, increase robustness, and minimize cost. The handheld CLYC multimode

  5. Faith Dialogue as a Pedagogy for a Post Secular Religious Education

    ERIC Educational Resources Information Center

    Castelli, Mike

    2012-01-01

    Inter-faith or inter-religious dialogue takes place for a range of reasons and comes in many guises, from the reconciliatory encounter to ease rivalry, to an engagement with the other in an exploration of the meaning and purpose of the human condition. This article examines the process of dialogue in a religious education context and proposes a…

  6. Multimodal Research: Addressing the Complexity of Multimodal Environments and the Challenges for CALL

    ERIC Educational Resources Information Center

    Tan, Sabine; O'Halloran, Kay L.; Wignell, Peter

    2016-01-01

    Multimodality, the study of the interaction of language with other semiotic resources such as images and sound resources, has significant implications for computer assisted language learning (CALL) with regards to understanding the impact of digital environments on language teaching and learning. In this paper, we explore recent manifestations of…

  7. Models of Persuasion Dialogue

    NASA Astrophysics Data System (ADS)

    Prakken, Henry

    This chapter reviews formal dialogue systems for persuasion. In persuasion dialogues two or more participants try to resolve a conflict of opinion, each trying to persuade the other participants to adopt their point of view. Dialogue systems for persuasion regulate how such dialogues can be conducted and what their outcome is. Good dialogue systems ensure that conflicts of view can be resolved in a fair and effective way [6]. The term ‘persuasion dialogue’ was coined by Walton [13] as part of his influential classification of dialogues into six types according to their goal. While persuasion aims to resolve a difference of opinion, negotiation tries to resolve a conflict of interest by reaching a deal, information seeking aims at transferring information, deliberation wants to reach a decision on a course of action, inquiry is aimed at “growth of knowledge and agreement” and quarrel is the verbal substitute of a fight. This classification leaves room for shifts of dialogues of one type to another. In particular, other types of dialogues can shift to persuasion when a conflict of opinion arises. For example, in information-seeking a conflict of opinion could arise on the credibility of a source of information, in deliberation the participants may disagree about likely effects of plans or actions and in negotiation they may disagree about the reasons why a proposal is in one’s interest.

  8. Information density converges in dialogue: Towards an information-theoretic model.

    PubMed

    Xu, Yang; Reitter, David

    2018-01-01

    The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and topic-shift mechanisms that differ from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contributions in information content display a new converging pattern. We draw explanations for this pattern from multiple perspectives: first, casting dialogue as an information-exchange system suggests that the pattern results from the two interlocutors each maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified. Copyright © 2017 Elsevier B.V. All rights reserved.
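
    A toy calculation can make the notion of sentence-level information density concrete. The sketch below is a simplified stand-in, not the paper's language model: it trains an add-one-smoothed bigram model on a four-turn toy dialogue and reports each turn's mean surprisal in bits per word, the quantity whose trajectory the entropy-rate-constancy principle constrains.

```python
# Per-turn information density as mean bigram surprisal (bits per word).
import math
from collections import Counter

def train_bigram(sentences):
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split()
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def sentence_density(sent, unigrams, bigrams, vocab_size):
    tokens = ["<s>"] + sent.lower().split()
    logps = [
        math.log2((bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab_size))  # add-one smoothing
        for w1, w2 in zip(tokens, tokens[1:])
    ]
    return -sum(logps) / len(logps)

dialogue = [
    "how was the meeting today",
    "the meeting went well thanks",
    "did you discuss the budget",
    "yes the budget was the main topic",
]
uni, bi = train_bigram(dialogue)
for i, sent in enumerate(dialogue, 1):
    print(f"turn {i}: {sentence_density(sent, uni, bi, len(uni)):.2f} bits/word")
```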

  9. Can a Rabbit Be a Scientist? Stimulating Philosophical Dialogue in Science Classes

    ERIC Educational Resources Information Center

    Dunlop, Lynda; de Schrijver, Jelle

    2018-01-01

    Philosophical dialogue requires an approach to teaching and learning in science that is focused on problem posing and provides space for meaning making, finding new ways of thinking and understanding and for linking science with broader human experiences. This article explores the role that philosophical dialogue can play in science lessons and…

  10. The Intersection of Multimodality and Critical Perspective: Multimodality as Subversion

    ERIC Educational Resources Information Center

    Huang, Shin-ying

    2015-01-01

    This study explores the relevance of multimodality to critical media literacy. It is based on the understanding that communication is intrinsically multimodal and multimodal communication is inherently social and ideological. By analysing two English-language learners' multimodal ensembles, the study reports on how multimodality contributes to a…

  11. A Multimodal Search Engine for Medical Imaging Studies.

    PubMed

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

    The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential for decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under active research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated into an open-source PACS with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
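
    One common way such engines combine modalities is late fusion of per-modality scores. The sketch below is a generic illustration under assumed inputs (a BM25-like text score and a CBIR feature vector per study); the function names and the 0.5 weighting are hypothetical and are not the API of the platform described above.

```python
# Late fusion of a text-relevance score and an image-feature cosine similarity.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_ranking(text_scores, query_vec, image_vecs, w_text=0.5):
    """text_scores: {study_id: text relevance}; image_vecs: {study_id: CBIR vector}."""
    t_max = max(text_scores.values()) or 1.0
    results = {}
    for sid, vec in image_vecs.items():
        t = text_scores.get(sid, 0.0) / t_max          # normalize text score to [0, 1]
        v = cosine(query_vec, vec)                     # visual similarity to the query
        results[sid] = w_text * t + (1 - w_text) * v   # weighted late fusion
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with random stand-in features.
rng = np.random.default_rng(1)
imgs = {f"study-{i}": rng.normal(size=64) for i in range(5)}
texts = {f"study-{i}": float(i) for i in range(5)}
print(fused_ranking(texts, rng.normal(size=64), imgs)[:3])
```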

  12. Creating Dialogue by Storytelling

    ERIC Educational Resources Information Center

    Passila, Anne; Oikarinen, Tuija; Kallio, Anne

    2013-01-01

    Purpose: The objective of this paper is to develop practice and theory from Augusto Boal's dialogue technique (Image Theatre) for organisational use. The paper aims to examine how the members in an organisation create dialogue together by using a dramaturgical storytelling framework where the dialogue emerges from storytelling facilitated by…

  13. AWACS Dialogue Training System (DTS) Evaluation

    DTIC Science & Technology

    2007-08-01

    Dialogue would also be welcome. Human instructors would also have the benefit of providing experienced advice and feedback. Feedback, or the lack of it...converse/start/commit a mission or to KIO when necessary. There was no response to KIO calls when fuel state was at Bingo and to KIO calls for

  14. A multimodal dataset for authoring and editing multimedia content: The MAMEM project.

    PubMed

    Nikolopoulos, Spiros; Petrantonakis, Panagiotis C; Georgiadis, Kostas; Kalaganis, Fotis; Liaros, Georgios; Lazarou, Ioulietta; Adam, Katerina; Papazoglou-Chalikias, Anastasios; Chatzilari, Elisavet; Oikonomou, Vangelis P; Kumar, Chandan; Menges, Raphael; Staab, Steffen; Müller, Daniel; Sengupta, Korok; Bostantjopoulou, Sevasti; Katsarou, Zoe; Zeilig, Gabi; Plotnik, Meir; Gotlieb, Amihai; Kizoni, Racheli; Fountoukidou, Sofia; Ham, Jaap; Athanasiou, Dimitrios; Mariakaki, Agnes; Comanducci, Dario; Sabatini, Edoardo; Nistico, Walter; Plank, Markus; Kompatsiaris, Ioannis

    2017-12-01

    We present a dataset that combines multimodal biosignals and eye-tracking information gathered under a human-computer interaction framework. The dataset was developed as part of the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.

  15. Yes we can! The Raffles Dialogue on Human Wellbeing and Security.

    PubMed

    Pang, Tikki; Chong, Yap Seng; Fong, Hildy; Harris, Eva; Horton, Richard; Lee, Kelley; Liu, Eugene; Mahbubani, Kishore; Pangestu, Mari; Yeoh, Khay Guan; Wong, John Eu-Li

    2015-08-01

    The future of human wellbeing and security depends on our ability to deal with the multiple effects of globalisation and on adoption of a new paradigm and philosophy for living and for health that emphasises people's wellbeing and social justice. Such was the topic of the inaugural Raffles Dialogue on Human Wellbeing and Security held in Singapore on Feb 2-3, 2015. Participants agreed that, to achieve these goals, four conditions must be met. First, equity must be integral to the implementation of technology. Second, there is an urgent need for innovations within our global institutions to make them "fit for purpose" in a rapidly changing world. Third, we must find the right balance between the roles of government and markets so that all those in need can access affordable medicine and health care. Finally, we must realise that we live in a small and interdependent "global village", where Asian countries need to assume greater leadership of our global village councils. This is the great imperative of our times. Copyright © 2015 Pang et al. Open Access article distributed under the terms of CC BY-NC-ND. Published by Elsevier Ltd. All rights reserved.

  16. Establishing Goals and Maintaining Coherence in Multiparty Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Groen, Martin; Noyes, Jan

    2013-01-01

    Communicating via text-only computer-mediated communication (CMC) channels is associated with a number of issues that would impair users in achieving dialogue coherence and goals. It has been suggested that humans have devised novel adaptive strategies to deal with those issues. However, it could be that humans rely on "classic"…

  17. Revisiting Dialogues and Monologues

    ERIC Educational Resources Information Center

    Kvernbekk, Tone

    2012-01-01

    In educational discourse dialogue tends to be viewed as being (morally) superior to monologue. When we look at them as basic forms of communication, we find that dialogue is a two-way, one-to-one form and monologue is a one-way, one-to-many form. In this paper I revisit the alleged (moral) superiority of dialogue. First, I problematize certain…

  18. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are acquired with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for developing a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. Users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  19. Observing tutorial dialogues collaboratively: insights about human tutoring effectiveness from vicarious learning.

    PubMed

    Chi, Michelene T H; Roy, Marguerite; Hausmann, Robert G M

    2008-03-01

    The goals of this study are to evaluate a relatively novel learning environment, as well as to seek greater understanding of why human tutoring is so effective. This alternative learning environment consists of pairs of students collaboratively observing a videotape of another student being tutored. Comparing this collaborative-observation environment with four other instructional methods (one-on-one human tutoring, observing tutoring individually, collaborating without observing, and studying alone), the results showed that students learned to solve physics problems just as effectively from observing tutoring collaboratively as the tutees who were being tutored individually. We explain the effectiveness of this learning environment by postulating that such a situation encourages learners to become active and constructive observers through interactions with a peer. In essence, collaborative observation combines the benefit of tutoring with the benefit of collaborating. The learning outcomes of the tutees and the collaborative observers, along with the tutoring dialogues, were used to further evaluate three hypotheses explaining why human tutoring is an effective learning method. Detailed analyses of the protocols at several grain sizes suggest that tutoring is effective when tutees are independently or jointly constructing knowledge with the tutor, but not when the tutor independently conveys knowledge. 2008 Cognitive Science Society, Inc.

  20. Dialogue as Data in Learning Analytics for Productive Educational Dialogue

    ERIC Educational Resources Information Center

    Knight, Simon; Littleton, Karen

    2015-01-01

    This paper provides a novel, conceptually driven stance on the state of the contemporary analytic challenges faced in the treatment of dialogue as a form of data across on- and offline sites of learning. In prior research, preliminary steps have been taken to detect occurrences of such dialogue using automated analysis techniques. Such advances…

  1. AdaRTE: adaptable dialogue architecture and runtime engine. A new architecture for health-care dialogue systems.

    PubMed

    Rojas-Barahona, L M; Giorgino, T

    2007-01-01

    Spoken dialogue systems have been increasingly employed to provide ubiquitous automated access via telephone to information and services for the non-Internet-connected public. In the health care context, dialogue systems have been applied successfully. Nevertheless, speech-based technology is not easy to implement because it requires a considerable development investment. The advent of VoiceXML for voice applications helped reduce the proliferation of incompatible dialogue interpreters, but introduced new complexity. As a response to these issues, we designed an architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialogue interactions through a high-level formalism offering both declarative and procedural features. AdaRTE's aim is to provide a foundation for deploying complex and adaptable dialogues while allowing experimentation with, and incremental adoption of, innovative speech technologies. It provides the dynamic behavior of Augmented Transition Networks and enables the generation of different backend formats such as VoiceXML. It is especially targeted at the health care context, where a framework for easy dialogue deployment could lower the barrier to a more widespread adoption of dialogue systems.
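
    To give a flavor of how a high-level dialogue description can be rendered into a backend format such as VoiceXML, here is a deliberately tiny sketch. The transition-network structure, state names, and prompts are invented for illustration and are not AdaRTE's actual formalism.

```python
# Toy transition-network dialogue description rendered as a VoiceXML <form>.
DIALOGUE = {
    "ask_symptom": {"prompt": "Please describe your main symptom.", "next": "ask_duration"},
    "ask_duration": {"prompt": "How many days have you had it?", "next": None},
}

def to_voicexml(dialogue, start):
    lines = ['<?xml version="1.0"?>', '<vxml version="2.1">', "  <form>"]
    state = start
    while state is not None:                 # walk the chain of dialogue states
        node = dialogue[state]
        lines += [
            f'    <field name="{state}">',
            f'      <prompt>{node["prompt"]}</prompt>',
            "    </field>",
        ]
        state = node["next"]
    lines += ["  </form>", "</vxml>"]
    return "\n".join(lines)

print(to_voicexml(DIALOGUE, "ask_symptom"))
```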

  2. A Procedure for Analyzing Classroom Dialogue.

    ERIC Educational Resources Information Center

    Clarke, John A.

    Classroom dialogue is an important influence on students' learning, making the structure and content of dialogue important research variables. An analysis of two sample classroom dialogues using the Thematic and Structural Analysis (TSA) Technique shows a positive correlation between the quality of dialogue structure and the level of student…

  3. Multi-mode of Four and Six Wave Parametric Amplified Process

    NASA Astrophysics Data System (ADS)

    Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng

    2017-03-01

    Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametric amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, the multi-mode behavior in the frequency domain is dominantly controlled by the intensity of the external dressing effect, or by the nonlinear phase shift through the internal dressing effect; on the other hand, the multi-mode behavior in the spatial domain is demonstrated visually and directly from images of the biphoton fields. Besides, the correlation of the two output fields is also demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.

  4. Multi-mode of Four and Six Wave Parametric Amplified Process.

    PubMed

    Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng

    2017-03-03

    Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametric amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, the multi-mode behavior in the frequency domain is dominantly controlled by the intensity of the external dressing effect, or by the nonlinear phase shift through the internal dressing effect; on the other hand, the multi-mode behavior in the spatial domain is demonstrated visually and directly from images of the biphoton fields. Besides, the correlation of the two output fields is also demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.

  5. Disruption, Dialogue, and Swerve: Reflective Structured Dialogue in Religious Studies Classrooms

    ERIC Educational Resources Information Center

    DeTemple, Jill; Sarrouf, John

    2017-01-01

    This article focuses on Reflective Structured Dialogue as a set of practices developed in the context of conflict resolution that are well suited to handling quotidian uneasiness and extraordinary moments of disruption in religious studies classrooms. After introducing Reflective Structured Dialogue's history, goals, and general practices, the…

  6. Multimodality Inferring of Human Cognitive States Based on Integration of Neuro-Fuzzy Network and Information Fusion Techniques

    NASA Astrophysics Data System (ADS)

    Yang, G.; Lin, Y.; Bhattacharya, P.

    2007-12-01

    To achieve effective and safe operation of systems in which the human and the machine interact, the machine needs to understand the human's state, especially the cognitive state, when the operator's task demands intensive cognitive activity. Because human cognitive states, behaviors, and expressive cues are highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model, and the outputs of the TSK model are then fused by the OWA, which produces outputs corresponding to the particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of the TSK-OWA, performed in the Northeastern University vehicle driving simulator, has shown that the proposed method is promising as a general tool for inferring human cognitive states and as a specific tool for driver fatigue detection.
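
    The OWA fusion step is simple enough to show directly. The sketch below assumes hypothetical per-cue outputs of the fuzzy inference stage and illustrative weights; it shows only the ordered weighted aggregation, not the TSK model itself.

```python
# Ordered weighted aggregation (OWA): sort the per-cue scores in descending
# order and combine them with position-dependent weights that sum to one.
import numpy as np

def owa(scores, weights):
    """scores: per-cue evidence (e.g., fatigue cues); weights: OWA weights, sum to 1."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]   # descending order
    weights = np.asarray(weights, dtype=float)
    assert scores.shape == weights.shape and abs(weights.sum() - 1.0) < 1e-9
    return float(scores @ weights)

# Hypothetical per-cue fatigue evidence: eyelid closure, head nodding, lane keeping.
cue_outputs = [0.8, 0.4, 0.6]
print(owa(cue_outputs, [0.5, 0.3, 0.2]))   # weights here emphasize the strongest cues
```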

  7. Multimodal Interfaces: Literature Review of Ecological Interface Design, Multimodal Perception and Attention, and Intelligent Adaptive Multimodal Interfaces

    DTIC Science & Technology

    2010-05-01

    Literature review of ecological interface design, multimodal perception and attention, and intelligent adaptive multimodal interfaces. Wayne Giang, Sathya Santhakumaran, Ehsan Masnavi, Doug… Advanced Interface Design Laboratory, E2-1303N, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1. Contract Project Manager: Dr. Catherine…

  8. Challenges in Transcribing Multimodal Data: A Case Study

    ERIC Educational Resources Information Center

    Helm, Francesca; Dooly, Melinda

    2017-01-01

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…

  9. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), or P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data have typically been divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and a support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential for use in hybrid BCI. PMID:26880873
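
    The core algebraic object, a rank-one component of a multiway tensor, can be illustrated with plain NumPy. The alternating-least-squares sketch below fits a single rank-one component to a synthetic channels x frequencies x time tensor; it is a generic textbook procedure, not the nonredundant decomposition proposed in the paper.

```python
# Rank-one approximation of a 3-way tensor by alternating least squares.
import numpy as np

def rank_one_als(T, n_iter=50):
    I, J, K = T.shape
    a, b, c = np.ones(I), np.ones(J), np.ones(K)
    for _ in range(n_iter):
        # Each factor is the least-squares solution with the other two fixed.
        a = np.einsum("ijk,j,k->i", T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum("ijk,i,k->j", T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum("ijk,i,j->k", T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

rng = np.random.default_rng(0)
# Synthetic "EEG-like" tensor: one true rank-one component plus noise.
a0, b0, c0 = rng.normal(size=8), rng.normal(size=16), rng.normal(size=32)
T = np.einsum("i,j,k->ijk", a0, b0, c0) + 0.05 * rng.normal(size=(8, 16, 32))

a, b, c = rank_one_als(T)
approx = np.einsum("i,j,k->ijk", a, b, c)
print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))
```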

  10. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    PubMed

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), or P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data have typically been divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and a support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential for use in hybrid BCI.

  11. Multimodal Neuroelectric Interface Development

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Totah, Joseph (Technical Monitor)

    2001-01-01

    This project aims to improve performance of NASA missions by developing multimodal neuroelectric technologies for augmented human-system interaction. Neuroelectric technologies will add completely new modes of interaction that operate in parallel with keyboards, speech, or other manual controls, thereby increasing the bandwidth of human-system interaction. We recently demonstrated the feasibility of real-time electromyographic (EMG) pattern recognition for a direct neuroelectric human-computer interface. We recorded EMG signals from an elastic sleeve with dry electrodes, while a human subject performed a range of discrete gestures. A machine-learning algorithm was trained to recognize the EMG patterns associated with the gestures and map them to control signals. Successful applications now include piloting two Class 4 aircraft simulations (F-15 and 757) and entering data with a "virtual" numeric keyboard. Current research focuses on on-line adaptation of EMG sensing and processing and recognition of continuous gestures. We are also extending this on-line pattern recognition methodology to electroencephalographic (EEG) signals. This will allow us to bypass muscle activity and draw control signals directly from the human brain. Our system can reliably detect the µ-rhythm (a periodic EEG signal from motor cortex in the 10 Hz range) with a lightweight headset containing saline-soaked sponge electrodes. The data show that the EEG µ-rhythm can be modulated by real and imagined motions. Current research focuses on using biofeedback to train human subjects to modulate EEG rhythms on demand, and on examining interactions of EEG-based control with EMG-based and manual control. Viewgraphs on these neuroelectric technologies are also included.

  12. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.
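
    For context, the joint-histogram quantity that, as noted above, most existing methods compute can be written in a few lines. The sketch below estimates mutual information between two synthetic "modalities" of the same scene and shows that misalignment lowers it; it is a baseline illustration, not the segmentation-based cost proposed here.

```python
# Joint-histogram mutual information between two images of the same scene.
import numpy as np

def mutual_information(img1, img2, bins=32):
    hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
t1 = rng.random((64, 64))
t2 = np.exp(-t1) + 0.05 * rng.random((64, 64))   # nonlinearly related second "modality"
print("MI (aligned)   :", mutual_information(t1, t2))
print("MI (misaligned):", mutual_information(t1, np.roll(t2, 8, axis=0)))
```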

  13. Empowering Dialogues in Humanistic Education

    ERIC Educational Resources Information Center

    Aloni, Nimrod

    2013-01-01

    In this article I propose a conception of empowering educational dialogue within the framework of humanistic education. It is based on the notions of Humanistic Education and Empowerment, and draws on a large and diverse repertoire of dialogues--from the classical Socratic, Confucian and Talmudic dialogues, to the modern ones associated with the…

  14. Learning to Internalize Action Dialogue

    ERIC Educational Resources Information Center

    Cotter, Teresa Ellen

    2011-01-01

    The purpose of this case study was to explore how participants of a communications workshop, "Action Dialogue," perceived their ability to engage in dialogue was improved and enhanced. The study was based on the following assumptions: (1) dialogue skills can be learned and people are able to learn these skills; (2) context and emotion influence…

  15. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    PubMed

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
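
    A gaze-contingent policy of this kind can be summarized as a small control loop. The sketch below is hypothetical: the sensing and actuation callbacks, the 0.5 s threshold, and the gaze-target labels are placeholders, not the authors' system.

```python
# Hypothetical gaze-contingent loop: the robot looks back at the user's face
# only after the user has fixated the robot's face for a short threshold.
import random
import time

LOOK_BACK_THRESHOLD = 0.5   # seconds of sustained user gaze before responding

def run_gaze_loop(get_user_gaze_target, set_robot_gaze, duration=30.0):
    """get_user_gaze_target() -> 'robot_face' | 'shared_object' | 'elsewhere';
       set_robot_gaze(target) commands the robot head. Both are placeholders."""
    fixation_start = None
    t_end = time.time() + duration
    while time.time() < t_end:
        target = get_user_gaze_target()
        if target == "robot_face":
            fixation_start = fixation_start or time.time()
            if time.time() - fixation_start >= LOOK_BACK_THRESHOLD:
                set_robot_gaze("user_face")        # respond with mutual gaze
        else:
            fixation_start = None
            set_robot_gaze("shared_object")        # default joint-attention target
        time.sleep(0.05)                           # ~20 Hz control loop

# Toy run with a simulated gaze stream and a no-op robot.
run_gaze_loop(lambda: random.choice(["robot_face", "elsewhere"]),
              lambda target: None, duration=0.2)
```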

  16. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.

    PubMed

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features used in feature-level fusion are raw biometric data, which contain richer information than decision-level or matching-score-level fusion; information fused at the feature level is therefore expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Out of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
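
    The fusion-then-reduce-then-classify pipeline described above can be sketched generically. The example below uses random stand-in feature vectors rather than the UPOL/PolyU data, and KNN in place of a tuned matcher; dimensions and parameters are illustrative only.

```python
# Feature-level fusion (concatenation) + PCA + KNN on stand-in biometric features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_subjects, samples_per_subject = 20, 10
labels = np.repeat(np.arange(n_subjects), samples_per_subject)

# Stand-in modality features: per-subject means plus noise.
palm = rng.normal(size=(n_subjects, 120))[labels] + rng.normal(scale=0.5, size=(len(labels), 120))
iris = rng.normal(size=(n_subjects, 80))[labels] + rng.normal(scale=0.5, size=(len(labels), 80))

fused = np.hstack([palm, iris])                      # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3,
                                          stratify=labels, random_state=0)

pca = PCA(n_components=40).fit(X_tr)                 # tame the dimensionality
clf = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X_tr), y_tr)
print("accuracy:", clf.score(pca.transform(X_te), y_te))
```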

  17. Reflection Effects in Multimode Fiber Systems Utilizing Laser Transmitters

    NASA Technical Reports Server (NTRS)

    Bates, Harry E.

    1991-01-01

    A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. Currently, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector-fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be simultaneously and rapidly acquired in a form convenient for analysis. Both single-mode and multimode fiber are installed at Kennedy. Since most are multimode, this effort concentrated on multimode systems.

  18. Reflection effects in multimode fiber systems utilizing laser transmitters

    NASA Astrophysics Data System (ADS)

    Bates, Harry E.

    1991-11-01

    A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. Currently, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector-fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be simultaneously and rapidly acquired in a form convenient for analysis. Both single-mode and multimode fiber are installed at Kennedy. Since most are multimode, this effort concentrated on multimode systems.

  19. Working Papers in Dialogue Modeling, Volume 2.

    ERIC Educational Resources Information Center

    Mann, William C.; And Others

    The technical working papers that comprise the two volumes of this document are related to the problem of creating a valid process model of human communication in dialogue. In Volume 2, the first paper concerns study methodology, and raises such issues as the choice between system-building and process-building, and the advantages of studying cases…

  20. Conducting Intelligent Business Dialogue.

    ERIC Educational Resources Information Center

    Hulbert, Jack E.

    1980-01-01

    Indicates that speaking skills (especially dialogue) are not adequately taught in management education. Describes effective dialogue as: defining the problem, gathering facts, interpreting the evidence, considering alternatives, and reaching decisions. Discusses various aspects of agreement and disagreement. (TJ)

  1. Multimodal user interfaces to improve social integration of elderly and mobility impaired.

    PubMed

    Dias, Miguel Sales; Pires, Carlos Galinho; Pinto, Fernando Miguel; Teixeira, Vítor Duarte; Freitas, João

    2012-01-01

    Technologies for Human-Computer Interaction (HCI) and communication have evolved tremendously over the past decades. However, citizens such as the mobility-impaired and the elderly still face many difficulties interacting with communication services, either due to HCI issues or to intrinsic design problems with the services. In this paper we start by presenting the results of two user studies, the first conducted with a group of mobility-impaired users, comprising paraplegic and quadriplegic individuals, and the second with elderly users. The study participants carried out a set of tasks with a multimodal (speech, touch, gesture, keyboard and mouse) and multi-platform (mobile, desktop) system offering integrated access to communication and entertainment services, such as email, agenda, conferencing, instant messaging and social media, referred to as LHC - Living Home Center. The system was designed to take into account the requirements captured from these users, with the objective of evaluating whether the adoption of multimodal interfaces for audio-visual communication and social media services could improve interaction with such services. Our study revealed that a multimodal prototype system offering natural interaction modalities, especially speech and touch, can in fact improve access to the presented services, contributing to reducing the social isolation of mobility-impaired and elderly users and improving their digital inclusion.

  2. Receptor-driven, multimodal mapping of the human amygdala.

    PubMed

    Kedo, Olga; Zilles, Karl; Palomero-Gallagher, Nicola; Schleicher, Axel; Mohlberg, Hartmut; Bludau, Sebastian; Amunts, Katrin

    2018-05-01

    The human amygdala consists of subdivisions contributing to various functions. However, principles of structural organization at the cellular and molecular level are not well understood. Thus, we re-analyzed the cytoarchitecture of the amygdala and generated cytoarchitectonic probabilistic maps of ten subdivisions in stereotaxic space based on novel workflows and mapping tools. This parcellation was then used as a basis for analyzing the receptor expression for 15 receptor types. Receptor fingerprints, i.e., the characteristic balance between densities of all receptor types, were generated in each subdivision to comprehensively visualize differences and similarities in receptor architecture between the subdivisions. Fingerprints of the central and medial nuclei and the anterior amygdaloid area were highly similar. Fingerprints of the lateral, basolateral and basomedial nuclei were also similar to each other, while those of the remaining nuclei were distinct in shape. Similarities were further investigated by a hierarchical cluster analysis: a two-cluster solution subdivided the phylogenetically older part (central, medial nuclei, anterior amygdaloid area) from the remaining parts of the amygdala. A more fine-grained three-cluster solution replicated our previous parcellation including a laterobasal, superficial and centromedial group. Furthermore, it helped to better characterize the paralaminar nucleus with a molecular organization in-between the laterobasal and the superficial group. The multimodal cyto- and receptor-architectonic analysis of the human amygdala provides new insights into its microstructural organization, intersubject variability, localization in stereotaxic space and principles of receptor-based neurochemical differences.

  3. Multimodal visualization interface for data management, self-learning and data presentation.

    PubMed

    Van Sint Jan, S; Demondion, X; Clapworthy, G; Louryan, S; Rooze, M; Cotten, A; Viceconti, M

    2006-10-01

    A multimodal visualization software package, called the Data Manager (DM), has been developed to increase interdisciplinary communication around the topic of visualization and modeling of various aspects of the human anatomy. Numerous tools used in radiology are integrated in the interface, which runs on standard personal computers. The available tools, combined with hierarchical data management and custom layouts, allow medical imaging data to be analyzed with advanced features outside radiological premises (for example, for patient review, conference presentation or tutorial preparation). The system is free and based on an open-source software development architecture, so updates of the system for custom applications are possible.

  4. Multimodal Word Meaning Induction From Minimal Exposure to Natural Text.

    PubMed

    Lazaridou, Angeliki; Marelli, Marco; Baroni, Marco

    2017-04-01

    By the time they reach early adulthood, English speakers are familiar with the meaning of thousands of words. In the last decades, computational simulations known as distributional semantic models (DSMs) have demonstrated that it is possible to induce word meaning representations solely from word co-occurrence statistics extracted from a large amount of text. However, while these models learn in batch mode from large corpora, human word learning proceeds incrementally after minimal exposure to new words. In this study, we run a set of experiments investigating whether minimal distributional evidence from very short passages suffices to trigger successful word learning in subjects, testing their linguistic and visual intuitions about the concepts associated with new words. After confirming that subjects are indeed very efficient distributional learners even from small amounts of evidence, we test a DSM on the same multimodal task, finding that it behaves in a remarkably human-like way. We conclude that DSMs provide a convincing computational account of word learning even at the early stages in which a word is first encountered, and that the way they build meaning representations can offer new insights into human language acquisition. Copyright © 2017 Cognitive Science Society, Inc.
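
    A miniature count-based DSM shows the kind of model being tested. The sketch below builds a co-occurrence matrix from a three-sentence toy corpus, applies positive PMI weighting, and compares words by cosine similarity; it is a generic construction, not the multimodal model evaluated in the study.

```python
# Minimal count-based distributional semantic model: co-occurrence + PPMI + cosine.
import numpy as np

corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "a dog and a cat are pets",
]
window = 2
vocab = sorted({w for line in corpus for w in line.split()})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

for line in corpus:
    toks = line.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if i != j:
                counts[idx[w], idx[toks[j]]] += 1   # symmetric context window

total = counts.sum()
pw = counts.sum(axis=1, keepdims=True) / total
pc = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts / total) / (pw * pc))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)  # keep only positive PMI

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

print("dog~cat  :", cosine(ppmi[idx["dog"]], ppmi[idx["cat"]]))
print("dog~mouse:", cosine(ppmi[idx["dog"]], ppmi[idx["mouse"]]))
```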

  5. A Framework and Toolkit for the Construction of Multimodal Learning Interfaces

    DTIC Science & Technology

    1998-04-29

    human communication modalities in the context of a broad class of applications, specifically those that support state manipulation via parameterized actions. The multimodal semantic model is also the basis for a flexible, domain independent, incrementally trainable multimodal interpretation algorithm based on a connectionist network. The second major contribution is an application framework consisting of reusable components and a modular, distributed system architecture. Multimodal application developers can assemble the components in the framework into a new application,

  6. Human-computer interaction in multitask situations

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1977-01-01

    Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
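
    The allocation idea can be mimicked with a toy discrete-event simulation. In the sketch below, action-evoking events arrive at exponential intervals and are served by the computer when it is idle, otherwise by the human; the service-time parameters stand in for the speed mismatch and are not taken from the original experiments.

```python
# Toy simulation of human-computer responsibility sharing in a multitask setting.
import random

def simulate(n_events=10_000, mean_interarrival=1.0,
             human_mean_service=0.9, computer_mean_service=0.6, seed=0):
    """Return the fraction of tasks handled by the computer and the mean wait."""
    random.seed(seed)
    t = 0.0
    free_human = free_computer = 0.0      # times at which each server becomes idle
    served_by_computer = 0
    total_wait = 0.0
    for _ in range(n_events):
        t += random.expovariate(1.0 / mean_interarrival)   # next action-evoking event
        if free_computer <= t:                             # computer is idle: it serves
            start = t
            free_computer = start + random.expovariate(1.0 / computer_mean_service)
            served_by_computer += 1
        else:                                              # otherwise the human serves
            start = max(t, free_human)
            free_human = start + random.expovariate(1.0 / human_mean_service)
        total_wait += start - t
    return served_by_computer / n_events, total_wait / n_events

share, wait = simulate()
print(f"computer handled {share:.1%} of tasks, mean wait before service {wait:.2f}")
```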

  7. Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues.

    DTIC Science & Technology

    1985-01-01

    The data analysis also indicates what kinds of knowledge an intelligent computer system will need to understand such dialogues. As Grosz [37]…

  8. Comparing Human-Human to Human-Computer Tutorial Dialogue

    DTIC Science & Technology

    2010-01-01

    acknowledged what their tutor said and participated in rapport building with chit-chat. This seems to be driven by a need to be polite and courteous to the…

  9. Analyzing Multimodal Interaction within a Classroom Setting

    ERIC Educational Resources Information Center

    Moura, Heloisa

    2006-01-01

    Human interactions are multimodal in nature. From simple to complex forms of transferal of information, human beings draw on a multiplicity of communicative modes, such as intonation and gaze, to make sense of everyday experiences. Likewise, the learning process, either within traditional classrooms or Virtual Learning Environments, is shaped by…

  10. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    PubMed Central

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features used in feature-level fusion are raw biometric data, which contain richer information than decision-level or matching-score-level fusion; information fused at the feature level is therefore expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Out of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813

  11. Combined multi-modal photoacoustic tomography, optical coherence tomography (OCT) and OCT angiography system with an articulated probe for in vivo human skin structure and vasculature imaging

    PubMed Central

    Liu, Mengyang; Chen, Zhe; Zabihian, Behrooz; Sinz, Christoph; Zhang, Edward; Beard, Paul C.; Ginner, Laurin; Hoover, Erich; Minneman, Micheal P.; Leitgeb, Rainer A.; Kittler, Harald; Drexler, Wolfgang

    2016-01-01

    Cutaneous blood flow accounts for approximately 5% of cardiac output in humans and plays a key role in a number of physiological and pathological processes. We show for the first time a multi-modal photoacoustic tomography (PAT), optical coherence tomography (OCT) and OCT angiography system with an articulated probe to extract human cutaneous vasculature in vivo in various skin regions. OCT angiography supplies the microvasculature that PAT alone is unable to provide. The co-registered vessel-network volumes are further embedded in the morphologic image provided by OCT. This multi-modal system is therefore demonstrated as a valuable tool for comprehensive non-invasive imaging of human skin vasculature and morphology in vivo. PMID:27699106

  12. Increasing trend of wearables and multimodal interface for human activity monitoring: A review.

    PubMed

    Kumari, Preeti; Mathew, Lini; Syal, Poonam

    2017-04-15

    Activity recognition technology is one of the most important technologies for life-logging and for the care of elderly persons. Elderly people prefer to live in their own homes, within their own locality. If they are able to do so, several benefits follow for society and the economy. However, living alone carries high risks. Wearable sensors have been developed to mitigate these risks and are expected to be ready for medical use. They can help in monitoring the wellness of elderly persons living alone by unobtrusively monitoring their daily activities. This study reviews the increasing trend of wearable devices and the need for multimodal recognition for continuous or intermittent monitoring of human activity and of biological signals such as the electroencephalogram (EEG), electrooculogram (EOG), electromyogram (EMG) and electrocardiogram (ECG), along with other parameters and symptoms. Such monitoring can provide necessary assistance in times of urgent need, which is crucial for the advancement of disease diagnosis and treatment. A shared-control architecture with a multimodal interface can be used in more complex environments where a larger number of commands must be issued, yielding better control. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometric recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.

  14. Improving the Efficiency of Dialogue in Tutoring

    ERIC Educational Resources Information Center

    Kopp, Kristopher J.; Britt, M. Anne; Millis, Keith; Graesser, Arthur C.

    2012-01-01

    The current studies investigated the efficient use of dialogue in intelligent tutoring systems that use natural language interaction. Such dialogues can be relatively time-consuming. This work addresses the question of how much dialogue is needed to produce significant learning gains. In Experiment 1, a full dialogue condition and a read-only…

  15. Object recognition through a multi-mode fiber

    NASA Astrophysics Data System (ADS)

    Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun

    2017-04-01

    We present a method of recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets with the method. The measurement process of the experimental setup was random and nonlinear because a multi-mode fiber is a typical strongly scattering medium and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of these learning methods achieved high accuracy rates of about 90% for the classification. The approach presented here can realize a compact and smart optical sensor. It is practically useful for medical applications such as endoscopy. Our study also indicates a promising use of artificial intelligence, which has progressed rapidly, for reducing optical and computational costs in optical sensing systems.
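
    The classification step alone is easy to sketch. The example below trains a support vector machine, one of the three learners compared in the study, on synthetic stand-ins for flattened speckle images; the data generation and parameters are invented for illustration.

```python
# SVM classification of flattened "speckle" patterns (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, size = 200, 32 * 32

# Two synthetic classes of speckle-like intensities with slightly different statistics.
face = rng.gamma(shape=2.0, scale=1.0, size=(n_per_class, size))
nonface = rng.gamma(shape=2.3, scale=1.0, size=(n_per_class, size))
X = np.vstack([face, nonface])
y = np.array([1] * n_per_class + [0] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```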

  16. Two-photon quantum walk in a multimode fiber

    PubMed Central

    Defienne, Hugo; Barbieri, Marco; Walmsley, Ian A.; Smith, Brian J.; Gigan, Sylvain

    2016-01-01

    Multiphoton propagation in connected structures—a quantum walk—offers the potential of simulating complex physical systems and provides a route to universal quantum computation. Increasing the complexity of quantum photonic networks where the walk occurs is essential for many applications. We implement a quantum walk of indistinguishable photon pairs in a multimode fiber supporting 380 modes. Using wavefront shaping, we control the propagation of the two-photon state through the fiber in which all modes are coupled. Excitation of arbitrary output modes of the system is realized by controlling classical and quantum interferences. This report demonstrates a highly multimode platform for multiphoton interference experiments and provides a powerful method to program a general high-dimensional multiport optical circuit. This work paves the way for the next generation of photonic devices for quantum simulation, computing, and communication. PMID:27152325

  17. Connecting multimodality in human communication

    PubMed Central

    Regenbogen, Christina; Habel, Ute; Kellermann, Thilo

    2013-01-01

    DCM analysis instead showed pronounced top-down control. Remarkably, all connections from the dmPFC to the three other regions were modulated by the experimental conditions. This observation is in line with the presumed role of the dmPFC in the allocation of attention. In contrast, all incoming connections to the AG were modulated, indicating its key role in integrating multimodal information and supporting comprehension. Notably, the input from the FFG to the AG was enhanced when facial expressions conveyed emotional information. These findings serve as preliminary results in understanding network dynamics in human emotional communication and empathy. PMID:24265613

  18. Connecting multimodality in human communication.

    PubMed

    Regenbogen, Christina; Habel, Ute; Kellermann, Thilo

    2013-01-01

    DCM analysis instead showed pronounced top-down control. Remarkably, all connections from the dmPFC to the three other regions were modulated by the experimental conditions. This observation is in line with the presumed role of the dmPFC in the allocation of attention. In contrast, all incoming connections to the AG were modulated, indicating its key role in integrating multimodal information and supporting comprehension. Notably, the input from the FFG to the AG was enhanced when facial expressions conveyed emotional information. These findings serve as preliminary results in understanding network dynamics in human emotional communication and empathy.

  19. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    PubMed Central

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  20. Multimodal instrument for high-sensitivity autofluorescence and spectral optical coherence tomography of the human eye fundus

    PubMed Central

    Komar, Katarzyna; Stremplewski, Patrycjusz; Motoczyńska, Marta; Szkulmowski, Maciej; Wojtkowski, Maciej

    2013-01-01

    In this paper we present a multimodal device for imaging the fundus of the human eye in vivo, which combines the functionality of confocal-SLO autofluorescence with Fourier-domain OCT. Native fluorescence of the human fundus was excited by a modulated laser beam (λ = 473 nm, 20 MHz) and lock-in detection was applied, resulting in improved sensitivity. The setup allows acquisition of high-resolution OCT and high-contrast AF images using a fluorescence excitation power of 50-65 μW without averaging consecutive images. Successful operation of the constructed device has been demonstrated for 8 healthy volunteers ranging in age from 24 to 83 years. PMID:24298426

  1. Things of the Mind. Dialogues with J. Krishnamurti.

    ERIC Educational Resources Information Center

    Khare, Brij B.

    This book is about the human mind, which is conditioned through education, formal or informal; it uses the philosophy of a world sage to understand the problems of contemporary society. "Things of the Mind" consists of four Socratic dialogues whose main topics are: the meaning of an education that requires young people…

  2. Multimodal pain stimulation of the gastrointestinal tract

    PubMed Central

    Drewes, Asbjørn Mohr; Gregersen, Hans

    2006-01-01

    Understanding and characterization of pain and other sensory symptoms are among the most important issues in the diagnosis and assessment of patients with gastrointestinal disorders. Methods to evoke and assess experimental pain have recently developed into a new field that allows multimodal stimulation (e.g., electrical, mechanical, thermal and chemical stimulation) of different nerves and pain pathways in the human gut. Such methods mimic to a high degree the pain experienced in the clinic. Multimodal pain methods have increased our basic understanding of different peripheral receptors in the gut in health and disease. Together with advanced muscle analysis, the methods have increased our understanding of receptors sensitive to mechanical, chemical and temperature stimuli in diseases such as systemic sclerosis and diabetes. The methods can also be used to unravel central pain mechanisms, such as those involved in allodynia, hyperalgesia and referred pain. Abnormalities in central pain mechanisms are often seen in patients with chronic gut pain, and hence methods relying on multimodal pain stimulation may help to understand the symptoms in these patients. Sex differences have been observed in several diseases of the gut, and differences in central pain processing between males and females have been hypothesized using multimodal pain stimulation. Finally, multimodal methods have recently been used to gain more insight into the effect of drugs against pain in the GI tract. Hence, multimodal methods undoubtedly represent a major step forward in the future characterization and treatment of patients with various diseases of the gut. PMID:16688791

  3. Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.

    PubMed

    Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun

    2017-11-01

    Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application.

  4. Are You Talking to Me? Dialogue Systems Supporting Mixed Teams of Humans and Robots

    NASA Technical Reports Server (NTRS)

    Dowding, John; Clancey, William J.; Graham, Jeffrey

    2006-01-01

    This position paper describes an approach to building spoken dialogue systems for environments containing multiple human speakers and hearers, and multiple robotic speakers and hearers. We address the issue, for robotic hearers, of whether the speech they hear is intended for them, or more likely to be intended for some other hearer. We describe data collected during a series of experiments involving teams of multiple humans and robots (and other software participants), and some preliminary results for distinguishing robot-directed speech from human-directed speech. The domain of these experiments is Mars-analogue planetary exploration. These Mars-analogue field studies involve two subjects in simulated planetary space suits doing geological exploration with the help of 1-2 robots, supporting software agents, a habitat communicator and links to a remote science team. The two subjects are performing a task (geological exploration) which requires them to speak with each other while also speaking with their assistants. The technique used here is a probabilistic context-free grammar language model in the speech recognizer, trained on prior robot-directed speech. Intuitively, the recognizer will give higher confidence to an utterance if it is similar to utterances that have been directed to the robot in the past.
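
    As a rough, hypothetical illustration of the underlying idea (score an utterance against a model trained on robot-directed speech and treat low-scoring utterances as human-directed), the sketch below substitutes a simple add-one-smoothed bigram model for the PCFG-based recognizer confidence described in the abstract; the toy corpus, function names, and the idea of a tuned threshold are invented for illustration.

    ```python
    import math
    from collections import Counter

    def train_bigram(sentences):
        """Add-one-smoothed bigram model over tokenized robot-directed utterances."""
        unigrams, bigrams = Counter(), Counter()
        for s in sentences:
            toks = ["<s>"] + s.lower().split() + ["</s>"]
            unigrams.update(toks)
            bigrams.update(zip(toks, toks[1:]))
        return unigrams, bigrams, len(unigrams)

    def avg_logprob(sentence, model):
        """Average per-bigram log-probability; higher = more 'robot-directed-like'."""
        unigrams, bigrams, vocab = model
        toks = ["<s>"] + sentence.lower().split() + ["</s>"]
        lp = sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
                 for a, b in zip(toks, toks[1:]))
        return lp / (len(toks) - 1)

    # Hypothetical usage: utterances scoring above a tuned threshold would be
    # treated as robot-directed, the rest as human-directed.
    robot_lm = train_bigram(["rover take a picture", "rover move forward two meters"])
    print(avg_logprob("rover take a picture of that rock", robot_lm))
    ```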

  5. Multimodal integration of anatomy and physiology classes: How instructors utilize multimodal teaching in their classrooms

    NASA Astrophysics Data System (ADS)

    McGraw, Gerald M., Jr.

    Multimodality is the theory of communication as it applies to social and educational semiotics (making meaning through the use of multiple signs and symbols). The term multimodality describes a communication methodology that includes multiple textual, aural, and visual applications (modes) that are woven together to create what is referred to as an artifact. Multimodal teaching methodology attempts to create deeper meaning for course content by activating the higher cognitive areas of the student's brain, creating more sustained retention of the information (Murray, 2009). The introduction of multimodal educational methodologies as a means to more optimally engage students has been documented within the educational literature. However, studies analyzing their distribution and penetration into the basic sciences, more specifically anatomy and physiology, have not been forthcoming. This study used a quantitative survey design to determine the degree to which instructors integrated multimodal teaching practices into their course curricula. The instrument used for the study was designed by the researcher based on evidence found in the literature and sent to members of three associations/societies for anatomy and physiology instructors: the Human Anatomy and Physiology Society; the iTeach Anatomy & Physiology Collaborate; and the American Physiology Society. Respondents totaled 182 instructor members of two- and four-year, private and public higher learning colleges, drawn from the three organizations, which collectively have over 13,500 members in over 925 higher learning institutions nationwide. The study concluded that the expansion of multimodal methodologies into anatomy and physiology classrooms is at the beginning of the process and that there is ample opportunity for expansion. Instructors continue to use lecture as their primary means of interaction with students. Email is still the major form of out-of-class communication for full-time instructors. Instructors with

  6. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  7. Readings and Experiences of Multimodality

    ERIC Educational Resources Information Center

    Leander, Kevin M.; Aziz, Seemi; Botzakis, Stergios; Ehret, Christian; Landry, David; Rowsell, Jennifer

    2017-01-01

    Our understanding of reading--including reading multimodal texts--is always constrained or opened up by what we consider to be a text, what aspects of a reader's embodied activity we focus on, and how we draw a boundary around a reading event. This article brings together five literacy researchers who respond to a human-scale graphic novel,…

  8. Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca

    2012-10-15

    Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image fusion and device navigation are reviewed along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.

  9. Using Online Dialogues to Connect Local Leaders and Climate Experts: Methods, Feedback and Lessons Learned from the Resilience Dialogues

    NASA Astrophysics Data System (ADS)

    Goodwin, M.; Pandya, R.; Weaver, C. P.; Zerbonne, S.; Bennett, N.; Spangler, B.

    2017-12-01

    Inclusive, multi-stakeholder dialogue, participatory planning and actionable science are necessary for just and effective climate resilience outcomes. How can we support that in practice? The Resilience Dialogues launched a public Beta in 2016-2017 to allow scientists and resilience practitioners to engage with local leaders from 10 communities around the US through a series of facilitated, online dialogues. We developed two, one-week dialogues for each community: one to consider ways to respond to observed and anticipated climate impacts through a resilience lens, and one to identify next steps and resources to advance key priorities. We divided the communities into three cohorts and refined the structure and facilitation strategy for these dialogues from one to the next based on participant feedback. This adaptive method helped participants engage in the dialogues more effectively and develop useful results. We distributed a survey to all participants following each cohort to capture feedback on the use and utility of the dialogues. While there was room for improvement in the program's technical interface, survey participants valued the dialogues and the opportunity to engage as equals. Local leaders said the dialogues helped identify new local pathways to approach resilience priorities. They felt they benefited from focused conversation and personalized introductions to best-matched resources. Practitioners learned how local leaders seek to apply climate science, and how to effectively communicate their expertise to community leaders in support of local planning efforts. We learned there is demand for specialized dialogues on issues like communication, financing and extreme weather. Overall, the desire of participants to continue to engage through this program, and others to enter, indicates that facilitated, open conversations between experts and local leaders can break down communication and access barriers between climate services providers and end

  10. Quality of human-computer interaction - results of a national usability survey of hospital-IT in Germany

    PubMed Central

    2011-01-01

    Background Due to the increasing functionality of medical information systems, it is hard to imagine day-to-day work in hospitals without IT support. Therefore, the design of dialogues between humans and information systems is one of the most important issues to be addressed in health care. This survey presents an analysis of the current quality level of human-computer interaction of healthcare IT in German hospitals, focused on the users' point of view. Methods To evaluate the usability of clinical IT according to the design principles of EN ISO 9241-10, the IsoMetrics Inventory, an assessment tool, was used. The focus of this paper has been put on suitability for task, training effort and conformity with user expectations, differentiated by information systems. Effectiveness has been evaluated with the focus on interoperability and functionality of different IT systems. Results 4521 persons from 371 hospitals visited the start page of the study, while 1003 persons from 158 hospitals completed the questionnaire. The results show relevant variations between different information systems. Conclusions Specialised information systems with defined functionality received better assessments than clinical information systems in general. This could be attributed to the improved customisation of these specialised systems for specific working environments. The results can be used as reference data for the evaluation and benchmarking of human-computer engineering in the clinical health IT context in future studies. PMID:22070880

  11. Multimodality: a basis for augmentative and alternative communication--psycholinguistic, cognitive, and clinical/educational aspects.

    PubMed

    Loncke, Filip T; Campbell, Jamie; England, Amanda M; Haley, Tanya

    2006-02-15

    Message generation is a complex process involving a number of subprocesses, including the selection of which modes to use. When expressing a message, human communicators typically use a combination of modes; this phenomenon is often termed multimodality. This article explores the use of models that explain multimodality as an explanatory framework for augmentative and alternative communication (AAC). Multimodality is analysed from a communication, psycholinguistic, and cognitive perspective. Theoretical and applied topics within AAC can be explained or described within the multimodality framework, considering iconicity, simultaneous communication, lexical organization, and compatibility of communication modes. Consideration of multimodality is critical to understanding underlying processes in individuals who use AAC and individuals who interact with them.

  12. MCA-NMF: Multimodal Concept Acquisition with Non-Negative Matrix Factorization

    PubMed Central

    Mangin, Olivier; Filliat, David; ten Bosch, Louis; Oudeyer, Pierre-Yves

    2015-01-01

    In this paper we introduce MCA-NMF, a computational model of the acquisition of multimodal concepts by an agent grounded in its environment. More precisely our model finds patterns in multimodal sensor input that characterize associations across modalities (speech utterances, images and motion). We propose this computational model as an answer to the question of how some class of concepts can be learnt. In addition, the model provides a way of defining such a class of plausibly learnable concepts. We detail why the multimodal nature of perception is essential to reduce the ambiguity of learnt concepts as well as to communicate about them through speech. We then present a set of experiments that demonstrate the learning of such concepts from real non-symbolic data consisting of speech sounds, images, and motions. Finally we consider structure in perceptual signals and demonstrate that a detailed knowledge of this structure, named compositional understanding can emerge from, instead of being a prerequisite of, global understanding. An open-source implementation of the MCA-NMF learner as well as scripts and associated experimental data to reproduce the experiments are publicly available. PMID:26489021
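
    The authors provide an open-source implementation of MCA-NMF; the fragment below is only a minimal sketch of the general idea (non-negative matrix factorization over concatenated modality features, so that each learned component spans modalities), using scikit-learn on synthetic data rather than the speech/image/motion data of the paper. All array shapes and parameter values are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)

    # Toy non-negative feature matrices: one row per observed multimodal example,
    # e.g. a sound histogram and an image histogram (purely synthetic here).
    sound_feats = rng.random((50, 30))
    image_feats = rng.random((50, 40))
    X = np.hstack([sound_feats, image_feats])   # concatenate modalities

    # Factorize X ~ W @ H; each row of H is a "multimodal concept" mixing
    # dimensions from both modalities, W gives per-example concept activations.
    model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)
    H = model.components_

    # A new sound-only observation can be projected onto the learned concepts by
    # zero-filling the missing modality (a crude illustration of cross-modal use).
    new_sound = np.hstack([rng.random((1, 30)), np.zeros((1, 40))])
    print(model.transform(new_sound))
    ```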

  13. Effect of external index of refraction on multimode fiber couplers.

    PubMed

    Wang, G Z; Murphy, K A; Claus, R O

    1995-12-20

    The dependence of the performance of fused-taper multimode fiber couplers on the refractive index of the material surrounding the taper region has been investigated both theoretically and experimentally. It has been identified that for a 2 × 2 multimode fiber coupler there is a range of output-power-coupling ratios for which the effect of the external refractive index is negligible. When the coupler is tapered beyond this region, its performance becomes dependent on the external index of refraction and the coupler becomes lossy. To analyze the multimode coupler-loss mechanism, we develop a two-dimensional ray-optics model that incorporates trapped cladding-mode loss and core-mode loss through frustrated total internal reflection. Computer-simulation results support the experimental observations. Related issues such as coupler fabrication and packaging are also discussed.

  14. Social Software for Reflective Dialogue: Questions about Reflection and Dialogue in Student Teachers' Blogs

    ERIC Educational Resources Information Center

    Granberg, Carina

    2010-01-01

    This article presents a study of 57 Swedish pre-school student teachers' experiences and achievements in using blogs for reflective dialogue over the course of 2007-2008. In order to examine the extent to which students engaged in reflective dialogue, text analyses of their blogs were carried out. Furthermore, 13 narrative interviews were…

  15. Combined multimodal photoacoustic tomography, optical coherence tomography (OCT) and OCT based angiography system for in vivo imaging of multiple skin disorders in human (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Mengyang; Chen, Zhe; Sinz, Christoph; Rank, Elisabet; Zabihian, Behrooz; Zhang, Edward Z.; Beard, Paul C.; Kittler, Harald; Drexler, Wolfgang

    2017-02-01

    All-optical photoacoustic tomography (PAT) using a planar Fabry-Perot interferometer polymer film sensor has been demonstrated for in vivo human palm imaging with an imaging penetration depth of 5 mm. The relatively larger vessels in the superficial plexus and the vessels in the dermal plexus are visible in PAT. However, due to both resolution and sensitivity limits, all-optical PAT cannot reveal smaller vessels such as capillary loops and venules. Melanin absorption also sometimes makes it difficult for PAT to resolve vessels. Optical coherence tomography (OCT) based angiography, on the other hand, has been proven suitable for visualizing the microvasculature in the first couple of millimeters of human skin. In our work, we combine an all-optical PAT system with an OCT system featuring a phase-stable akinetic swept source. This multimodal PAT/OCT/OCT-angiography system provides co-registered human skin vasculature information as well as cutaneous structural information. The scanning units of the sub-systems are assembled into one probe, which is then mounted onto a portable rack. The probe and rack design gives six degrees of freedom, allowing the multimodal optical imaging probe to access nearly all regions of the human body. Utilizing this probe, we perform imaging on patients with various skin disorders as well as on healthy controls. The fused PAT/OCT-angiography volume shows the complete blood vessel network in human skin, embedded in the morphology provided by OCT. A comparison between results from the disordered regions and the normal regions demonstrates the clinical translational value of this multimodal optical imaging system in dermatology.

  16. The WCCES and Intercultural Dialogue: Historical Perspectives and Continuing Challenges

    NASA Astrophysics Data System (ADS)

    Bray, Mark

    2008-07-01

    The World Council of Comparative Education Societies (WCCES) has been strongly concerned with intercultural dialogue since the Council was created in 1970. Indeed advancement of education "for international understanding in the interests of peace, intercultural cooperation, mutual respect among peoples and observance of human rights" is one of the goals built into the WCCES Statutes. This paper begins with a focus on the origins and goals of the WCCES, noting in particular links with the mission of UNESCO. The paper then considers dimensions of evolution in the work of the WCCES in the domain of intercultural dialogue. It underlines the growth of the WCCES and the continuing challenges for securing balanced representation of voices and perspectives.

  17. Multimodal communication in animals, humans and robots: an introduction to perspectives in brain-inspired informatics.

    PubMed

    Wermter, S; Page, M; Knowles, M; Gallese, V; Pulvermüller, F; Taylor, J

    2009-03-01

    Recent years have seen convergence in research on brain mechanisms and neurocomputational approaches, culminating in the creation of a new generation of robots whose artificial "brains" respect neuroscience principles and whose "cognitive" systems venture into higher cognitive domains such as planning and action sequencing, complex object and concept processing, and language. The present article gives an overview of selected projects in this general multidisciplinary field. The work reviewed centres on research funded by the EU in the context of the New and Emergent Science and Technology, NEST, funding scheme highlighting the topic "What it means to be human". Examples of such projects include learning by imitation (Edici project), examining the origin of human rule-based reasoning (Far), studying the neural origins of language (Neurocom), exploring the evolutionary origins of the human mind (Pkb140404), researching into verbal and non-verbal communication (Refcom), using and interpreting signs (Sedsu), characterising human language by structural complexity (Chlasc), and representing abstract concepts (Abstract). Each of the communication-centred research projects revealed individual insights; however, there had been little overall analysis of results and hypotheses. In the Specific Support Action Nestcom, we proposed to analyse some NEST projects focusing on the central question "What it means to communicate" and to review, understand and integrate the results of previous communication-related research, in order to develop and communicate multimodal experimental hypotheses for investigation by future projects. The present special issue includes a range of papers on the interplay between neuroinformatics, brain science and robotics in the general area of higher cognitive functions and multimodal communication. These papers extend talks given at the NESTCOM workshops, at ICANN (http://www.his.sunderland.ac.uk/nestcom/workshop/icann.html) in Porto and at the first

  18. The Quantum Human Computer (QHC) Hypothesis

    ERIC Educational Resources Information Center

    Salmani-Nodoushan, Mohammad Ali

    2008-01-01

    This article attempts to suggest the existence of a human computer called Quantum Human Computer (QHC) on the basis of an analogy between human beings and computers. To date, there are two types of computers: Binary and Quantum. The former operates on the basis of binary logic where an object is said to exist in either of the two states of 1 and…

  19. Historical Text Comprehension Reflective Tutorial Dialogue System

    ERIC Educational Resources Information Center

    Grigoriadou, Maria; Tsaganou, Grammatiki; Cavoura, Theodora

    2005-01-01

    The Reflective Tutorial Dialogue System (ReTuDiS) is a system for modelling learners' historical text comprehension through reflective dialogue. The system infers learners' cognitive profiles and constructs their learner models. Based on the learner model, the system plans the appropriate--personalized for learners--reflective tutorial dialogue in…

  20. Feedback Dialogues That Stimulate Students' Reflective Thinking

    ERIC Educational Resources Information Center

    Van der Schaaf, Marieke; Baartman, Liesbeth; Prins, Frans; Oosterbaan, Anne; Schaap, Harmen

    2013-01-01

    How can feedback dialogues stimulate students' reflective thinking? This study aims to investigate: (1) the effects of feedback dialogues between teachers and students on students' perceptions of teacher feedback and (2) the relation between features of feedback dialogues and students' thinking activities as part of reflective thinking. A…

  1. Human computer interface guide, revision A

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Human Computer Interface Guide, SSP 30540, is a reference document for the information systems within the Space Station Freedom Program (SSFP). The Human Computer Interface Guide (HCIG) provides guidelines for the design of computer software that affects human performance, specifically, the human-computer interface. This document contains an introduction and subparagraphs on SSFP computer systems, users, and tasks; guidelines for interactions between users and the SSFP computer systems; human factors evaluation and testing of the user interface system; and example specifications. The contents of this document are intended to be consistent with the tasks and products to be prepared by NASA Work Package Centers and SSFP participants as defined in SSP 30000, Space Station Program Definition and Requirements Document. The Human Computer Interface Guide shall be implemented on all new SSFP contractual and internal activities and shall be included in any existing contracts through contract changes. This document is under the control of the Space Station Control Board, and any changes or revisions will be approved by the deputy director.

  2. Utilizing Multi-Modal Literacies in Middle Grades Science

    ERIC Educational Resources Information Center

    Saurino, Dan; Ogletree, Tamra; Saurino, Penelope

    2010-01-01

    The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication expands our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…

  3. Risk-Based Neuro-Grid Architecture for Multimodal Biometrics

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sitalakshmi; Kulkarni, Siddhivinayak

    Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, government and even home environments. However, such systems would require large distributed datasets with multiple computational realms spanning organisational boundaries and individual privacies.

  4. Classroom Dialogue and Science Achievement.

    ERIC Educational Resources Information Center

    Clarke, John A.

    This study reports the application to classroom dialogue of the Thematic and Structural Analysis (TSA) Technique which has been used previously in the analysis of text materials. The TSA Technique identifies themes (word clusters) and their structural relationship throughout sequentially organized material. Dialogues from four Year 8 science…

  5. Automatic analysis of medical dialogue in the home hemodialysis domain: structure induction and summarization.

    PubMed

    Lacson, Ronilda C; Barzilay, Regina; Long, William J

    2006-10-01

    Spoken medical dialogue is a valuable source of information for patients and caregivers. This work presents a first step towards automatic analysis and summarization of spoken medical dialogue. We first abstract a dialogue into a sequence of semantic categories using linguistic and contextual features integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). We then describe and implement a summarizer that utilizes this automatically induced structure. Our evaluation results indicate that automatically generated summaries exhibit high resemblance to summaries written by humans. In addition, task-based evaluation shows that physicians can reasonably answer questions related to patient care by looking at the automatically generated summaries alone, in contrast to the physicians' performance when they were given summaries from a naïve summarizer (p<0.05). This work demonstrates the feasibility of automatically structuring and summarizing spoken medical dialogue.
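
    The abstract does not specify the features or learner used; as a generic stand-in for the supervised classification step (mapping each dialogue turn to a semantic category), the sketch below trains a TF-IDF bag-of-words model with logistic regression on a tiny invented example. The category names and utterances are hypothetical, not taken from the paper's data.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled dialogue turns (labels are illustrative only).
    turns = [
        "my blood pressure was 140 over 90 this morning",
        "did you change the dialysate concentration",
        "the machine alarmed twice during the run",
        "please fax the lab results to the clinic",
    ]
    labels = ["clinical_data", "treatment", "machine", "scheduling"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(turns, labels)

    # Classify a new turn into one of the semantic categories.
    print(clf.predict(["the pump alarmed again tonight"]))
    ```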

  6. An Infrastructure for Web-Based Computer Assisted Learning

    ERIC Educational Resources Information Center

    Joy, Mike; Muzykantskii, Boris; Rawles, Simon; Evans, Michael

    2002-01-01

    We describe an initiative under way at Warwick to provide a technical foundation for computer aided learning and computer-assisted assessment tools, which allows a rich dialogue sensitive to individual students' response patterns. The system distinguishes between dialogues for individual problems and the linking of problems. This enables a subject…

  7. The Potential of Computer-Based Expert Systems for Special Educators in Rural Settings.

    ERIC Educational Resources Information Center

    Parry, James D.; Ferrara, Joseph M.

    Knowledge-based expert computer systems are addressing issues relevant to all special educators, but are particularly relevant in rural settings where human experts are less available because of distance and cost. An expert system is an application of artificial intelligence (AI) that typically engages the user in a dialogue resembling the…

  8. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  9. Fourier domain asymmetric cryptosystem for privacy protected multimodal biometric security

    NASA Astrophysics Data System (ADS)

    Choudhury, Debesh

    2016-04-01

    We propose a Fourier domain asymmetric cryptosystem for multimodal biometric security. One modality of biometrics (such as face) is used as the plaintext, which is encrypted by another modality of biometrics (such as fingerprint). A private key is synthesized from the encrypted biometric signature by complex spatial Fourier processing. The encrypted biometric signature is further encrypted by other biometric modalities, and the corresponding private keys are synthesized. The resulting biometric signature is privacy protected since the encryption keys are provided by the human, and hence those are private keys. Moreover, the decryption keys are synthesized using those private encryption keys. The encrypted signatures are decrypted using the synthesized private keys and inverse complex spatial Fourier processing. Computer simulations demonstrate the feasibility of the technique proposed.
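
    The abstract describes the cryptosystem only at a high level; the snippet below is a loose, simplified illustration of Fourier-domain phase encryption of one biometric with another (encrypt with a phase mask derived from the key biometric, decrypt with its conjugate). It is not the paper's asymmetric construction, and the image sizes and normalization are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    face = rng.random((128, 128))          # plaintext biometric (e.g. face image)
    fingerprint = rng.random((128, 128))   # key biometric (e.g. fingerprint image)

    # Phase mask derived from the key biometric.
    phase_key = np.exp(1j * 2 * np.pi * fingerprint)

    # Encrypt: multiply the plaintext spectrum by the phase mask.
    cipher = np.fft.ifft2(np.fft.fft2(face) * phase_key)

    # Decrypt: undo the phase in the Fourier domain with the conjugate mask.
    recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase_key)).real

    print(np.allclose(recovered, face))    # True up to numerical precision
    ```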

  10. Dialogue

    ERIC Educational Resources Information Center

    Jaffe-Notier, Tamara

    2017-01-01

    In this essay, the author asserts that the dialogic pedagogy of Paulo Freire provides a good description of the dynamic of authentic learning. The author narrates one short example and one long example of Freirian dialogues occurring in a public high school, giving details that describe moments of transformation. The short narrative sketches…

  11. Dialogue on Dialogue on Dialogic Pedagogy

    ERIC Educational Resources Information Center

    Sullivan, Paul

    2014-01-01

    It appears that in September, 2011, Rome experienced much more than a dialogue on dialogic pedagogy but a gladiatorial clash of personalities and ideas. Heat, we are told, was generated (above, p.1) and in the dissipation of this heat on to the page, even the reader gets hot and flushed. We are told that arguments "fail" (above, p.16);…

  12. A comparative evaluation plan for the Maintenance, Inventory, and Logistics Planning (MILP) System Human-Computer Interface (HCI)

    NASA Technical Reports Server (NTRS)

    Overmyer, Scott P.

    1993-01-01

    The primary goal of this project was to develop a tailored and effective approach to the design and evaluation of the human-computer interface (HCI) to the Maintenance, Inventory and Logistics Planning (MILP) System in support of the Mission Operations Directorate (MOD). An additional task that was undertaken was to assist in the review of Ground Displays for Space Station Freedom (SSF) by attending the Ground Displays Interface Group (GDIG) and commenting on the preliminary design for these displays. Based upon data gathered over the 10 week period, this project has hypothesized that the proper HCI concept for navigating through maintenance databases for large space vehicles is one based upon a spatial, direct-manipulation approach. This dialogue style can then be coupled with a traditional text-based DBMS, after the user has determined the general nature and location of the information needed. This conclusion is in contrast with the currently planned HCI for MILP, which uses a traditional form-fill-in dialogue style for all data access and retrieval. In order to resolve this difference in HCI and dialogue styles, it is recommended that a comparative evaluation be performed that combines both subjective and objective metrics to determine the optimal (performance-wise) and preferred approach for end users. The proposed plan has been outlined in the previous paragraphs and is available in its entirety in the Technical Report associated with this project. Further, it is suggested that several of the more useful features of the Maintenance Operations Management System (MOMS), especially those developed by the end users, be incorporated into MILP to save development time and money.

  13. Dialogue Education in the Post-Secondary Classroom: Reflecting on Dialogue Processes from Two Higher Education Settings in North America

    ERIC Educational Resources Information Center

    Gunnlaugson, Olen; Moore, Janet

    2009-01-01

    In this article, educators Olen Gunnlaugson and Janet Moore reflect on their experiences developing and facilitating two dialogue-based courses. They proceed with a brief overview of dialogue education and how they are situating their approaches to dialogue within the field of higher education and in terms of transformative learning. Each then…

  14. NASA Alumni League Dialogue

    NASA Image and Video Library

    2011-03-04

    Former NASA Administrator James Beggs smiles during a dialogue on the future of the space program, Friday, March 4, 2011, at NASA Headquarters in Washington. Beggs was NASA's sixth administrator serving from July 1981 to December 1985. The dialogue was part of the program “The State of the Agency: NASA Future Programs Presentation” sponsored by the NASA Alumni League with support from the AAS, AIAA, CSE and WIA. Photo Credit: (NASA/Paul E. Alers)

  15. Making IBM's Computer, Watson, Human

    PubMed Central

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial “self,” are all incidental to its humanity. PMID:22942530

  16. Making IBM's Computer, Watson, Human.

    PubMed

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial "self," are all incidental to its humanity.

  17. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

    PubMed

    Crouser, R J; Chang, R

    2012-12-01

    Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

  18. Volume curtaining: a focus+context effect for multimodal volume visualization

    NASA Astrophysics Data System (ADS)

    Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross

    2014-03-01

    In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.

  19. Computers and the landscape

    Treesearch

    Gary H. Elsner

    1979-01-01

    Computers can analyze and help to plan the visual aspects of large wildland landscapes. This paper categorizes and explains current computer methods available. It also contains a futuristic dialogue between a landscape architect and a computer.

  20. A new piezoelectric energy harvesting design concept: multimodal energy harvesting skin.

    PubMed

    Lee, Soobum; Youn, Byeng D

    2011-03-01

    This paper presents an advanced design concept for piezoelectric energy harvesting (EH), referred to as the multimodal EH skin. This EH design facilitates the use of multimodal vibration and enhances power harvesting efficiency. The multimodal EH skin is an extension of our previous work, EH skin, which was an innovative design paradigm for a piezoelectric energy harvester: a vibrating skin structure and an additional thin piezoelectric layer in one device. A computational (finite element) model of the multilayered assembly - the vibrating skin structure and piezoelectric layer - is constructed and the optimal topology and/or shape of the piezoelectric layer is found for maximum power generation from multiple vibration modes. A design rationale for the multimodal EH skin is proposed: designing the piezoelectric material distribution and the external resistors. In the material design step, the piezoelectric material is segmented by inflection lines from the multiple vibration modes of interest to minimize voltage cancellation; the inflection lines are detected using the voltage phase. In the external resistor design step, the resistor values are found for each segment to maximize power output. The presented design concept, which can be applied to any engineering system with multimodal harmonic-vibrating skins, was applied to two case studies: an aircraft skin and a power transformer panel. The excellent performance of the multimodal EH skin was demonstrated, showing larger power generation than EH skin without segmentation or unimodal EH skin.
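
    As a minimal numerical illustration of the external-resistor design step, the sketch below sweeps a resistive load for one piezoelectric segment modelled as a sinusoidal current source in parallel with its clamped capacitance, for which the optimum lies near R = 1/(ωC). The frequency, capacitance, and current amplitude are invented values and do not come from the paper.

    ```python
    import numpy as np

    f = 120.0                 # assumed vibration frequency of one mode, Hz
    C_p = 50e-9               # assumed clamped capacitance of the segment, F
    I0 = 1e-4                 # assumed short-circuit current amplitude, A
    w = 2 * np.pi * f

    R = np.logspace(3, 7, 2000)                       # candidate load resistors, ohms
    P = 0.5 * I0**2 * R / (1.0 + (w * R * C_p)**2)    # average power delivered to each load

    R_best = R[np.argmax(P)]
    print(f"numerical optimum ~ {R_best:.3g} ohm, analytic 1/(wC) = {1/(w*C_p):.3g} ohm")
    ```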

  1. Imre Lakatos's Use of Dialogue.

    ERIC Educational Resources Information Center

    Greig, Judith Maxwell

    This paper uses a book, "Proofs and Refutations: The Logic of Mathematical Discovery," as an example of Lakatos's use of dialogue. The book was originally adapted from his dissertation and influenced by Polya and Popper. His discussion of the Euler conjecture is summarized. Three purposes for choosing the dialogue form for the book were…

  2. Multimodal Event Detection in Twitter Hashtag Networks

    DOE PAGES

    Yilmaz, Yasin; Hero, Alfred O.

    2016-07-01

    In this study, event detection in a multimodal Twitter dataset is considered. We treat the hashtags in the dataset as instances with two modes: text and geolocation features. The text feature consists of a bag-of-words representation. The geolocation feature consists of geotags (i.e., geographical coordinates) of the tweets. Fusing the multimodal data we aim to detect, in terms of topic and geolocation, the interesting events and the associated hashtags. To this end, a generative latent variable model is assumed, and a generalized expectation-maximization (EM) algorithm is derived to learn the model parameters. The proposed method is computationally efficient, and lends itself to big datasets. Lastly, experimental results on a Twitter dataset from August 2014 show the efficacy of the proposed method.
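
    The paper's generative model and generalized EM derivation are not reproduced in the abstract; as an illustrative stand-in, the sketch below runs EM for a toy latent-event mixture in which each component has a Gaussian over geolocation and a multinomial over words, so both modalities are fused in the responsibilities. All shapes, initializations, and smoothing constants are assumptions.

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def em_multimodal(geo, counts, K=3, iters=50, seed=0):
        """Toy EM for a latent-event mixture fusing geolocation (Gaussian, isotropic)
        and text counts (multinomial); an illustrative stand-in, not the paper's model.
        geo: (N, 2) coordinates per hashtag; counts: (N, V) word counts per hashtag."""
        rng = np.random.default_rng(seed)
        N, V = counts.shape
        pi = np.full(K, 1.0 / K)
        mu = geo[rng.choice(N, K, replace=False)]      # initialize means at random points
        var = np.full(K, np.var(geo))                  # isotropic variances
        theta = rng.dirichlet(np.ones(V), size=K)      # per-event word distributions

        for _ in range(iters):
            # E-step: log responsibility of each latent event for each hashtag.
            log_r = np.zeros((N, K))
            for k in range(K):
                d2 = np.sum((geo - mu[k])**2, axis=1)
                log_geo = -d2 / (2 * var[k]) - np.log(2 * np.pi * var[k])
                log_txt = counts @ np.log(theta[k])
                log_r[:, k] = np.log(pi[k]) + log_geo + log_txt
            log_r -= logsumexp(log_r, axis=1, keepdims=True)
            r = np.exp(log_r)

            # M-step: re-estimate weights, locations, spreads, word distributions.
            nk = r.sum(axis=0) + 1e-12
            pi = nk / N
            mu = (r.T @ geo) / nk[:, None]
            for k in range(K):
                d2 = np.sum((geo - mu[k])**2, axis=1)
                var[k] = (r[:, k] @ d2) / (2 * nk[k]) + 1e-6
            theta = (r.T @ counts) + 0.1               # Laplace-style smoothing
            theta /= theta.sum(axis=1, keepdims=True)
        return pi, mu, var, theta, r
    ```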

  3. Jim and Dave: A Dialogue.

    ERIC Educational Resources Information Center

    Doud, Robert E.

    This is a fictional dialogue intended to honor Jim Kingman and David Leary, both professors of history who retired after long careers at Pasadena City College in California (PCC). The dialogue hypothesizes the observations of both men as they look on the honorary gold plates of previous retirees that decorate the wall of a PCC public dining hall.…

  4. Safety Metrics for Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  5. Semiautomatic tumor segmentation with multimodal images in a conditional random field framework.

    PubMed

    Hu, Yu-Chi; Grossberg, Michael; Mageras, Gikas

    2016-04-01

    Volumetric medical images of a single subject can be acquired using different imaging modalities, such as computed tomography, magnetic resonance imaging (MRI), and positron emission tomography. In this work, we present a semiautomatic segmentation algorithm that can leverage the synergies between different image modalities while integrating interactive human guidance. The algorithm provides a statistical segmentation framework partly automating the segmentation task while still maintaining critical human oversight. The statistical models presented are trained interactively using simple brush strokes to indicate tumor and nontumor tissues and using intermediate results within a patient's image study. To accomplish the segmentation, we construct the energy function in the conditional random field (CRF) framework. For each slice, the energy function is set using the estimated probabilities from both user brush stroke data and prior approved segmented slices within a patient study. The progressive segmentation is obtained using a graph-cut-based minimization. Although no similar semiautomated algorithm is currently available, we evaluated our method with an MRI data set from Medical Image Computing and Computer Assisted Intervention Society multimodal brain segmentation challenge (BRATS 2012 and 2013) against a similar fully automatic method based on CRF and a semiautomatic method based on grow-cut, and our method shows superior performance.
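
    As a deliberately simplified stand-in for the segmentation step, the sketch below builds a binary CRF-style energy (unary terms from hypothetical per-pixel tumor probabilities of the kind the brush strokes would provide, plus Potts smoothness terms on the 4-neighbourhood) and minimizes it with iterated conditional modes rather than the graph-cut solver used by the authors.

    ```python
    import numpy as np

    def segment_icm(prob_fg, smoothness=1.0, iters=10):
        """Minimal binary CRF-style segmentation (illustrative only).

        prob_fg    : (H, W) array of estimated per-pixel tumor probabilities
        smoothness : Potts penalty for disagreeing 4-neighbours
        """
        eps = 1e-6
        unary = np.stack([-np.log(1 - prob_fg + eps),    # cost of label 0 (background)
                          -np.log(prob_fg + eps)])       # cost of label 1 (tumor)
        labels = (prob_fg > 0.5).astype(int)
        H, W = prob_fg.shape
        for _ in range(iters):
            for y in range(H):
                for x in range(W):
                    nbrs = [labels[yy, xx]
                            for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                            if 0 <= yy < H and 0 <= xx < W]
                    costs = [unary[l, y, x] + smoothness * sum(n != l for n in nbrs)
                             for l in (0, 1)]
                    labels[y, x] = int(np.argmin(costs))
        return labels
    ```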

  6. Peer work in Open Dialogue: A discussion paper.

    PubMed

    Bellingham, Brett; Buus, Niels; McCloughen, Andrea; Dawson, Lisa; Schweizer, Richard; Mikes-Liu, Kristof; Peetz, Amy; Boydell, Katherine; River, Jo

    2018-03-25

    Open Dialogue is a resource-oriented approach to mental health care that originated in Finland. As Open Dialogue has been adopted across diverse international healthcare settings, it has been adapted according to contextual factors. One important development in Open Dialogue has been the incorporation of paid, formal peer work. Peer work draws on the knowledge and wisdom gained through lived experience of distress and hardship to establish mutual, reciprocal, and supportive relationships with service users. As Open Dialogue is now being implemented across mental health services in Australia, stakeholders are beginning to consider the role that peer workers might have in this model of care. Open Dialogue was not, initially, conceived to include a specific role for peers, and there is little available literature, and even less empirical research, in this area. This discussion paper aims to surface some of the current debates and ideas about peer work in Open Dialogue. Examples and models of peer work in Open Dialogue are examined, and the potential benefits and challenges of adopting this approach in health services are discussed. Peer work in Open Dialogue could potentially foster democracy and disrupt clinical hierarchies, but could also move peer work from reciprocal to a less symmetrical relationship of 'giver' and 'receiver' of care. Other models of care, such as lived experience practitioners in Open Dialogue, can be conceived. However, it remains uncertain whether the hierarchical structures in healthcare and current models of funding would support any such models.

  7. NASA Alumni League Dialogue

    NASA Image and Video Library

    2011-03-04

    Former NASA Administrator James Beggs is seen during a dialogue with present NASA Administrator Charles Bolden on the future of the space program, Friday, March 4, 2011, at NASA Headquarters in Washington. Beggs was NASA's sixth administrator serving from July 1981 to December 1985. The dialogue was part of the program “The State of the Agency: NASA Future Programs Presentation” sponsored by the NASA Alumni League with support from the AAS, AIAA, CSE and WIA. Photo Credit: (NASA/Paul E. Alers)

  8. Occupational stress in human computer interaction.

    PubMed

    Smith, M J; Conway, F T; Karsh, B T

    1999-04-01

    There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.

  9. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate method for calibrating multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization on the human brain. PMID:24803954
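
    One generic building block of any such multi-view calibration is estimating the rigid transform between corresponding 3-D points seen by two sensors. The sketch below shows the standard SVD-based (Kabsch) solution with a synthetic check; it is not the paper's calibration pipeline, and the point sets are invented.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rigid transform (rotation R, translation t) mapping the
        3-D points `src` onto `dst` via the SVD-based Kabsch method."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Hypothetical check with corresponding points from two views.
    src = np.random.default_rng(0).random((10, 3))
    theta = np.radians(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
    R, t = rigid_transform(src, dst)
    print(np.allclose(dst, src @ R.T + t))               # True in this noiseless case
    ```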

  10. Pollution going multimodal: the complex impact of the human-altered sensory environment on animal perception and performance

    PubMed Central

    Halfwerk, Wouter; Slabbekoorn, Hans

    2015-01-01

    Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels of noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or the absence of ecologically important signals and cues. We realize that such an approach is often time and energy consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception. PMID:25904319

  11. Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain

    PubMed Central

    Quiroga, Rodrigo Quian; Kraskov, Alexander; Koch, Christof; Fried, Itzhak

    2010-01-01

    Summary Different pictures of Marilyn Monroe can evoke the same percept, even if greatly modified as in Andy Warhol’s famous portraits. But how does the brain recognize highly variable pictures as the same percept? Various studies have provided insights into how visual information is processed along the “ventral pathway,” via both single-cell recordings in monkeys [1, 2] and functional imaging in humans [3, 4]. Interestingly, in humans, the same “concept” of Marilyn Monroe can be evoked with other stimulus modalities, for instance by hearing or reading her name. Brain imaging studies have identified cortical areas selective to voices [5, 6] and visual word forms [7, 8]. However, how visual, text, and sound information can elicit a unique percept is still largely unknown. By using presentations of pictures and of spoken and written names, we show that (1) single neurons in the human medial temporal lobe (MTL) respond selectively to representations of the same individual across different sensory modalities; (2) the degree of multimodal invariance increases along the hierarchical structure within the MTL; and (3) such neuronal representations can be generated within less than a day or two. These results demonstrate that single neurons can encode percepts in an explicit, selective, and invariant manner, even if evoked by different sensory modalities. PMID:19631538

  12. Human Expertise Helps Computer Classify Images

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Two-domain method of computational classification of images requires less computation than other methods for computational recognition, matching, or classification of images or patterns. Does not require explicit computational matching of features, and incorporates human expertise without requiring translation of mental processes of classification into language comprehensible to computer. Conceived to "train" a computer to analyze photomicrographs of microscope-slide specimens of leucocytes from human peripheral blood to distinguish between specimens from healthy patients and specimens from traumatized patients.

  13. Students' Multimodal Construction of the Work-Energy Concept

    NASA Astrophysics Data System (ADS)

    Tang, Kok-Sing; Chee Tan, Seng; Yeo, Jennifer

    2011-09-01

    This article examines the role of multimodalities in representing the concept of work-energy by studying the collaborative discourse of a group of ninth-grade physics students engaging in an inquiry-based instruction. Theorising a scientific concept as a network of meaning relationships across semiotic modalities situated in human activity, this article analyses the students' interactions through their use of natural language, mathematical symbolism, depiction, and gestures, and examines the intertextual meanings made through the integration of these modalities. Results indicate that the thematic integration of multimodalities is both difficult and necessary for students in order to construct a scientific understanding that is congruent with the physics curriculum. More significantly, the difficulties in multimodal integration stem from the subtle differences in the categorical, quantitative, and spatial meanings of the work-energy concept whose contrasts are often not made explicit to the students. The implications of these analyses and findings for science teaching and educational research are discussed.

  14. Conversational evidence in therapeutic dialogue.

    PubMed

    Strong, Tom; Busch, Robbie; Couture, Shari

    2008-07-01

    Family therapists' participation in therapeutic dialogue with clients is typically informed by evidence of how such dialogue is developing. In this article, we propose that conversational evidence, the kind that can be empirically analyzed using discourse analyses, be considered a contribution to widening psychotherapy's evidence base. After some preliminaries about what we mean by conversational evidence, we provide a genealogy of evaluative practice in psychotherapy, and examine qualitative evaluation methods for their theoretical compatibilities with social constructionist approaches to family therapy. We then move on to examine the notion of accomplishment in therapeutic dialogue given how such accomplishments can be evaluated using conversation analysis. We conclude by considering a number of research and pedagogical implications we associate with conversational evidence.

  15. Ubiquitous human computing.

    PubMed

    Zittrain, Jonathan

    2008-10-28

    Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a drawing pin and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This paper explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.

  16. Use of Multi-Modal Media and Tools in an Online Information Literacy Course: College Students' Attitudes and Perceptions

    ERIC Educational Resources Information Center

    Chen, Hsin-Liang; Williams, James Patrick

    2009-01-01

    This project studies the use of multi-modal media objects in an online information literacy class. One hundred sixty-two undergraduate students answered seven surveys. Significant relationships are found among computer skills, teaching materials, communication tools and learning experience. Multi-modal media objects and communication tools are…

  17. The "Motherese" of Mr. Rogers: A Description of the Dialogue of Educational Television Programs.

    ERIC Educational Resources Information Center

    Rice, Mabel L.; Haight, Patti L.

    Dialogue from 30-minute samples from "Sesame Street" and "Mr. Rogers' Neighborhood" was coded for grammar, content, and discourse. Grammatical analysis used the LINGQUEST computer-assisted language assessment program (Mordecai, Palen, and Palmer 1982). Content coding was based on categories developed by Rice (1984) and…

  18. Interreligious Dialogue in Schools: Beyond Asymmetry and Categorisation?

    ERIC Educational Resources Information Center

    Riitaoja, Anna-Leena; Dervin, Fred

    2014-01-01

    Interreligious dialogue is a central objective in European and UNESCO policy and research documents, in which educational institutions are seen as central places for dialogue. In this article, we discuss this type of dialogue under the conditions of asymmetry and categorisation in two Finnish schools. Finnish education has often been lauded for…

  19. The Socratic Dialogue and Teacher Education

    ERIC Educational Resources Information Center

    Knezic, Dubravka; Wubbels, Theo; Elbers, Ed; Hajer, Maaike

    2010-01-01

    This article argues that the Socratic Dialogue in the Nelson and Heckmann tradition can make a considerable contribution to the training of teachers. A review of the literature and empirical research supports the claim that the Socratic Dialogue promotes student teachers' interpersonal sensitivity while stimulating conceptual understanding. The article…

  20. Some Features of Dialogue between Twins

    ERIC Educational Resources Information Center

    Savic, Svenka; Jocic, Mirjana

    1975-01-01

    Dialogues of sets of socially similar twins are studied. The opinion that twins have slower syntactic development than non-twins is seriously questioned. Dialogues with twins saying the same utterance together, correcting each other, quarreling, playing verbal games, etc. are analyzed in their deep structure. (SCC)

  1. Multiscale and multi-modality visualization of angiogenesis in a human breast cancer model

    PubMed Central

    Cebulla, Jana; Kim, Eugene; Rhie, Kevin; Zhang, Jiangyang

    2017-01-01

    Angiogenesis in breast cancer helps fulfill the metabolic demands of the progressing tumor and plays a critical role in tumor metastasis. Therefore, various imaging modalities have been used to characterize tumor angiogenesis. While micro-CT (μCT) is a powerful tool for analyzing the tumor microvascular architecture at micron-scale resolution, magnetic resonance imaging (MRI) with its sub-millimeter resolution is useful for obtaining in vivo vascular data (e.g. tumor blood volume and vessel size index). However, integration of these microscopic and macroscopic angiogenesis data across spatial resolutions remains challenging. Here we demonstrate the feasibility of ‘multiscale’ angiogenesis imaging in a human breast cancer model, wherein we bridge the resolution gap between ex vivo μCT and in vivo MRI using intermediate resolution ex vivo MR microscopy (μMRI). To achieve this integration, we developed suitable vessel segmentation techniques for the ex vivo imaging data and co-registered the vascular data from all three imaging modalities. We showcase two applications of this multiscale, multi-modality imaging approach: (1) creation of co-registered maps of vascular volume from three independent imaging modalities, and (2) visualization of differences in tumor vasculature between viable and necrotic tumor regions by integrating μCT vascular data with tumor cellularity data obtained using diffusion-weighted MRI. Collectively, these results demonstrate the utility of ‘mesoscopic’ resolution μMRI for integrating macroscopic in vivo MRI data and microscopic μCT data. Although focused on the breast tumor xenograft vasculature, our imaging platform could be extended to include additional data types for a detailed characterization of the tumor microenvironment and computational systems biology applications. PMID:24719185

  2. NASA Alumni League Dialogue

    NASA Image and Video Library

    2011-03-04

    Former NASA Administrator James Beggs, left, and present NASA Administrator Charles Bolden conduct a dialogue on the future of the space program, Friday, March 4, 2011, at NASA Headquarters in Washington. Beggs was NASA's sixth administrator, serving from July 1981 to December 1985. Bolden took over the post as NASA's 12th administrator in July 2009. The dialogue is part of the program “The State of the Agency: NASA Future Programs Presentation” sponsored by the NASA Alumni League with support from the AAS, AIAA, CSE and WIA. Photo Credit: (NASA/Paul E. Alers)

  3. Multi-Modalities Sensor Science

    DTIC Science & Technology

    2015-02-28

    Final report for a SOARD project (principal investigator: Sailing He) combining spectroscopy, nano-technology, biophotonics and multi-physics modeling to produce adaptable bio-nanostructure enhanced multi-mode sensor science. Keywords: bio-sensing, cross-discipline, multi-physics, nano-technology. Reported accomplishments include 1) a general method for designing a radome to enhance…

  4. A multimodal interface for real-time soldier-robot teaming

    NASA Astrophysics Data System (ADS)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech, gestures, and process natural language in real-time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  5. Multimodal imaging of lung cancer and its microenvironment (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hariri, Lida P.; Niederst, Matthew J.; Mulvey, Hillary; Adams, David C.; Hu, Haichuan; Chico Calero, Isabel; Szabari, Margit V.; Vakoc, Benjamin J.; Hasan, Tayyaba; Bouma, Brett E.; Engelman, Jeffrey A.; Suter, Melissa J.

    2016-03-01

    Despite significant advances in targeted therapies for lung cancer, nearly all patients develop drug resistance within 6-12 months and prognosis remains poor. Developing drug resistance is a progressive process that involves tumor cells and their microenvironment. We hypothesize that microenvironment factors alter tumor growth and response to targeted therapy. We conducted in vitro studies in human EGFR-mutant lung carcinoma cells, and demonstrated that factors secreted by lung fibroblasts result in increased tumor cell survival during targeted therapy with the EGFR inhibitor gefitinib. We also demonstrated that increased environment stiffness results in increased tumor survival during gefitinib therapy. In order to test our hypothesis in vivo, we developed a multimodal optical imaging protocol for preclinical intravital imaging in mouse models to assess the tumor and its microenvironment over time. We have successfully conducted multimodal imaging of dorsal skinfold chamber (DSC) window mice implanted with GFP-labeled human EGFR-mutant lung carcinoma cells and visualized changes in tumor development and microenvironment facets over time. Multimodal imaging included structural OCT to assess tumor viability and necrosis, polarization-sensitive OCT to measure tissue birefringence for collagen/fibroblast detection, and Doppler OCT to assess tumor vasculature. Confocal imaging was also performed for high-resolution visualization of EGFR-mutant lung cancer cells labeled with GFP, and was coregistered with OCT. Our results demonstrated that stromal support and vascular growth are essential to tumor progression. Multimodal imaging is a useful tool to assess a tumor and its microenvironment over time.

  6. Multimode optical dermoscopy (SkinSpect) analysis for skin with melanocytic nevus

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf; Kelly, Kristen M.; Maly, Tyler; Chave, Robert; Booth, Nicholas; Durkin, Anthony J.; Farkas, Daniel L.

    2016-04-01

    We have developed a multimode dermoscope (SkinSpect™) capable of illuminating human skin samples in-vivo with spectrally-programmable linearly-polarized light at 33 wavelengths between 468 nm and 857 nm. Diffusely reflected photons are separated into collinear and cross-polarized image paths, and images are captured for each illumination wavelength. In vivo human skin nevi (N = 20) were evaluated with the multimode dermoscope, and melanin and hemoglobin concentrations were compared with Spatially Modulated Quantitative Spectroscopy (SMoQS) measurements. Both systems show low correlation between their melanin and hemoglobin concentrations, demonstrating the ability of the SkinSpect™ to separate these molecular signatures and thus act as a biologically plausible device capable of early-onset melanoma detection.

  7. Contesting the Constitution: The Constitutional Dialogues.

    ERIC Educational Resources Information Center

    Hilenski, Ferdinand Alexi

    This historical dramatization, prepared for presentation at the 1985 Wyoming Chatauqua, contains three dialogues, set during the administration of President Thomas Jefferson and presenting the issues surrounding the drafting and ratification of the U.S. Constitution. The dialogues are designed to be presented in three segments to permit discussion…

  8. Multiscale climate emulator of multimodal wave spectra: MUSCLE-spectra

    NASA Astrophysics Data System (ADS)

    Rueda, Ana; Hegermiller, Christie A.; Antolinez, Jose A. A.; Camus, Paula; Vitousek, Sean; Ruggiero, Peter; Barnard, Patrick L.; Erikson, Li H.; Tomás, Antonio; Mendez, Fernando J.

    2017-02-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this a complex problem that is tractable only with computationally expensive numerical models. So far, the skill of statistical downscaling models based on parametric (unimodal) wave conditions has been limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including sea and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.
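
    The statistical components named in this record (GEV marginals, a Gaussian copula for interdependence, and a Markov chain over daily weather types) can be sketched as follows. All parameter values, the number of weather types and the two-variable setup are illustrative assumptions, not the calibrated MUSCLE-spectra emulator.

      # Illustrative sketch of the emulator's statistical building blocks, using
      # synthetic parameters: GEV marginals for Hs and Tm, a Gaussian copula for
      # their dependence, and a Markov chain for the daily weather-type sequence.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # 1) Markov chain over three hypothetical weather types
      P = np.array([[0.7, 0.2, 0.1],
                    [0.3, 0.5, 0.2],
                    [0.2, 0.3, 0.5]])        # assumed daily transition matrix
      days, wt = 365, [0]
      for _ in range(days - 1):
          wt.append(rng.choice(3, p=P[wt[-1]]))

      # 2) Gaussian copula sample for (Hs, Tm) dependence, then GEV marginals
      rho = 0.6                               # assumed Hs-Tm correlation
      L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
      z = L @ rng.standard_normal((2, days))
      u = stats.norm.cdf(z)                   # uniform margins via the copula

      # Assumed GEV parameters (shape c, loc, scale) per weather type
      gev_hs = [(-0.1, 2.0, 0.5), (-0.1, 3.0, 0.8), (-0.2, 4.5, 1.2)]
      gev_tm = [(-0.1, 8.0, 1.0), (-0.1, 10.0, 1.5), (-0.2, 13.0, 2.0)]

      hs = np.array([stats.genextreme.ppf(u[0, i], *gev_hs[w]) for i, w in enumerate(wt)])
      tm = np.array([stats.genextreme.ppf(u[1, i], *gev_tm[w]) for i, w in enumerate(wt)])
      print("simulated Hs range (m):", hs.min().round(2), "-", hs.max().round(2))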

  9. Multiscale Climate Emulator of Multimodal Wave Spectra: MUSCLE-spectra

    NASA Astrophysics Data System (ADS)

    Rueda, A.; Hegermiller, C.; Alvarez Antolinez, J. A.; Camus, P.; Vitousek, S.; Ruggiero, P.; Barnard, P.; Erikson, L. H.; Tomas, A.; Mendez, F. J.

    2016-12-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this problem complex, yet tractable using computationally expensive numerical models. So far, the skill of statistical-downscaling models based on parametric (unimodal) wave conditions is limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical-downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including sea and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the Southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.

  10. Dialogue-Games: Meta-Communication Structures for Natural Language Interaction

    DTIC Science & Technology

    1977-01-01

    Dialogue-games, as described here, are not necessarily competitive, consciously pursued, or zero-sum. (James A. Levin and James A. Moore, ARPA Order No. 2930, NR 134-374, ISI/RR-77-53, January 1977.) Recurring patterns in natural language interaction are represented by a set of knowledge structures called dialogue-games, capturing shared conventional knowledge…

  11. Multimodality Instrument for Tissue Characterization

    NASA Technical Reports Server (NTRS)

    Mah, Robert W. (Inventor); Andrews, Russell J. (Inventor)

    2000-01-01

    A system with a multimodality instrument for tissue identification, which includes a computer-controlled, motor-driven heuristic probe with a multisensory tip, is discussed. For neurosurgical applications, the instrument is mounted on a stereotactic frame for the probe to penetrate the brain in a precisely controlled fashion. The resistance of the brain tissue being penetrated is continually monitored by a miniaturized strain gauge attached to the probe tip. Other modality sensors may be mounted near the probe tip to provide real-time tissue characterizations and the ability to detect the proximity of blood vessels, thus eliminating errors normally associated with registration of pre-operative scans, tissue swelling, elastic tissue deformation, human judgement, etc., and rendering surgical procedures safer, more accurate, and more efficient. A neural network program adaptively learns the information on resistance and other characteristic features of normal brain tissue during the surgery and provides near real-time modeling. A fuzzy logic interface to the neural network program incorporates expert medical knowledge in the learning process. Identification of abnormal brain tissue is determined by the detection of change and comparison with previously learned models of abnormal brain tissues. The operation of the instrument is controlled through a user-friendly graphical interface. Patient data is presented in a 3D stereographics display. Acoustic feedback of selected information may optionally be provided. Upon detection of close proximity to blood vessels or abnormal brain tissue, the computer-controlled motor immediately stops probe penetration.

  12. Three Modes of Dialogue about Works of Art

    ERIC Educational Resources Information Center

    Hubard, Olga M.

    2010-01-01

    Over the last two decades, art teachers and museum educators have increasingly embraced group dialogue to help students make meaning from works of art. To an outside observer, most dialogues about art could appear to be the same. Nevertheless, careful analysis reveals that the spirit and dynamics can differ greatly from one dialogue to the next.…

  13. Lessons learnt from the Climate Dialogue initiative

    NASA Astrophysics Data System (ADS)

    Crok, Marcel; Strengers, Bart; Vasileiadou, Eleftheria

    2015-04-01

    The weblog Climate Dialogue (climatedialogue.org) has been an experimental climate change communication project. It was the result of a motion in the Dutch parliament, which asked the Dutch government "to also involve climate sceptics in future studies on climate change". Climate Dialogue was set up by the Royal Netherlands Meteorological Institute (KNMI), the Netherlands Environmental Assessment Agency (PBL), and Dutch science journalist Marcel Crok. It operated for slightly more than two years (from November 2012 until December 2014). Around 20 climate scientists from all over the world, many of them leading in their respective fields, participated in six dialogues. Climate Dialogue was a moderated blog on controversial climate science topics that introduced a combination of several novel elements: a) bringing together scientists with widely separated viewpoints, b) strict moderation of the discussion, and c) compilation of executive and extended summaries of the discussions that were approved by the invited scientists. In our talk, we will discuss the operation and results of the Climate Dialogue project, focusing more explicitly on the lessons learnt with respect to online climate change communication, addressing the question: "To what extent can online climate change communication bring together climate scientists with widely separated viewpoints, and what would be the advantage of such communication practice?" We identify how Climate Dialogue was received and perceived by the participating scientists, but also by different scientific and online communities. Finally, we present our ideas on how Climate Dialogue could evolve into a novel way of contributing to (climate) science and what steps would be necessary and/or beneficial for such a platform to survive and succeed.

  14. Pollution going multimodal: the complex impact of the human-altered sensory environment on animal perception and performance.

    PubMed

    Halfwerk, Wouter; Slabbekoorn, Hans

    2015-04-01

    Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels in noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or the absence of ecologically important signals and cues. We realize that such approach is often time and energy consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  15. Detection of human brain tumor infiltration with multimodal multiscale optical analysis

    NASA Astrophysics Data System (ADS)

    Poulon, Fanny; Metais, Camille; Jamme, Frederic; Zanello, Marc; Varlet, Pascale; Devaux, Bertrand; Refregiers, Matthieu; Abi Haidar, Darine

    2017-02-01

    Brain tumor surgeries face major challenges in improving patients' quality of life. Maximizing the extent of resection while preserving surrounding eloquent brain areas is necessary to maintain the onco-functional balance. A tool able to increase the accuracy of tissue analysis and to deliver an immediate diagnosis of tumor tissue could drastically improve current surgeries and patient survival rates. To achieve such performance, our group has started a complete optical study of biopsies, ranging from the ultraviolet to the infrared. Four different contrasts were used: 1) spectral analysis covering the DUV to IR range, 2) two-photon fluorescence lifetime imaging and one-photon time-domain measurement, 3) second harmonic generation imaging and 4) fluorescence imaging using DUV to IR, one- and two-photon excitation. All these measurements were done on the endogenous fluorescence of tissues to avoid any bias and further clinical complication due to the introduction of external markers. The different modalities are then cross-compared to build a matrix of criteria to discriminate tumorous tissues. The results of multimodal optical analysis on human biopsies were compared to gold-standard histopathology.

  16. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  17. Combining kriging, multispectral and multimodal microscopy to resolve malaria-infected erythrocyte contents.

    PubMed

    Dabo-Niang, S; Zoueu, J T

    2012-09-01

    In this communication, we demonstrate how kriging, combined with multispectral and multimodal microscopy, can enhance the resolution of images of malaria-infected cells and provide more detail on their composition for analysis and diagnosis. The results of this interpolation, applied to the two principal components of the multispectral and multimodal images, show that examination of the contents of Plasmodium falciparum-infected human erythrocytes is improved. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
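
    As a rough illustration of the interpolation step, the sketch below performs ordinary kriging of one principal-component band onto a finer grid using an assumed exponential variogram; a fitted variogram and a dedicated kriging library would be used in practice, and this is not the authors' implementation.

      # Minimal ordinary-kriging sketch (illustrative only): upsample one
      # principal-component band of a multispectral image with an assumed
      # exponential variogram and a hand-rolled solver.
      import numpy as np

      def exponential_variogram(h, sill=1.0, range_=4.0):
          # Assumed variogram model; in practice this would be fitted to the data.
          return sill * (1.0 - np.exp(-h / range_))

      def ordinary_kriging(xy, values, targets):
          n = len(xy)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = exponential_variogram(d)
          A[n, n] = 0.0                                   # Lagrange multiplier row/column
          out = np.empty(len(targets))
          for k, t in enumerate(targets):
              b = np.ones(n + 1)
              b[:n] = exponential_variogram(np.linalg.norm(xy - t, axis=1))
              w = np.linalg.solve(A, b)
              out[k] = w[:n] @ values                     # kriging weights times data
          return out

      # Synthetic stand-in for a PCA band sampled on a coarse pixel grid
      rng = np.random.default_rng(1)
      gx, gy = np.meshgrid(np.arange(0, 16, 2), np.arange(0, 16, 2))
      xy = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
      values = np.sin(xy[:, 0] / 3.0) + 0.1 * rng.standard_normal(len(xy))

      fx, fy = np.meshgrid(np.arange(16), np.arange(16))   # finer target grid
      targets = np.column_stack([fx.ravel(), fy.ravel()]).astype(float)
      fine = ordinary_kriging(xy, values, targets).reshape(16, 16)
      print(fine.shape)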

  18. Semi-quantum communication: protocols for key agreement, controlled secure direct communication and dialogue

    NASA Astrophysics Data System (ADS)

    Shukla, Chitra; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    Semi-quantum protocols that allow some of the users to remain classical are proposed for a large class of problems associated with secure communication and secure multiparty computation. Specifically, first-time semi-quantum protocols are proposed for key agreement, controlled deterministic secure communication and dialogue, and it is shown that the semi-quantum protocols for controlled deterministic secure communication and dialogue can be reduced to semi-quantum protocols for e-commerce and private comparison (socialist millionaire problem), respectively. Complementing with the earlier proposed semi-quantum schemes for key distribution, secret sharing and deterministic secure communication, set of schemes proposed here and subsequent discussions have established that almost every secure communication and computation tasks that can be performed using fully quantum protocols can also be performed in semi-quantum manner. Some of the proposed schemes are completely orthogonal-state-based, and thus, fundamentally different from the existing semi-quantum schemes that are conjugate coding-based. Security, efficiency and applicability of the proposed schemes have been discussed with appropriate importance.

  19. Travels With Gates: Shangri-La Dialogue

    Science.gov Websites

    Coverage of the June 2010 meetings Secretary Gates held with U.S. partners as part of the annual Asia security summit known as the "Shangri-La Dialogue". Related story: U.S.-China Military Ties Need Work.

  20. Multimode Directional Coupler

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N. (Inventor); Wintucky, Edwin G. (Inventor)

    2016-01-01

    A multimode directional coupler is provided. In some embodiments, the multimode directional coupler is configured to receive a primary signal and a secondary signal at a first port of a primary waveguide. The primary signal is configured to propagate through the primary waveguide and be outputted at a second port of the primary waveguide. The multimode directional coupler also includes a secondary waveguide configured to couple the secondary signal from the primary waveguide with no coupling of the primary signal into the secondary waveguide. The secondary signal is configured to propagate through the secondary waveguide and be outputted from a port of the secondary waveguide.

  1. Fast and Robust Registration of Multimodal Remote Sensing Images via Dense Orientated Gradient Feature

    NASA Astrophysics Data System (ADS)

    Ye, Y.

    2017-09-01

    This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. In the definition of the proposed method, we first develop a pixel-wise feature descriptor named Dense Orientated Gradient Histogram (DOGH), which can be computed effectively at every pixel and is robust to non-linear intensity differences between images. Then a fast similarity metric based on DOGH is built in the frequency domain using the Fast Fourier Transform (FFT) technique. Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric achieves better matching performance and computational efficiency than state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
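
    A toy version of the two ingredients described above, a dense orientated-gradient feature and a frequency-domain similarity map for template matching, is sketched below. The descriptor and the images are generic stand-ins, not the published DOGH definition or the evaluated datasets.

      # Toy sketch of dense orientated-gradient features plus FFT-based template
      # matching (a generic stand-in, not the exact DOGH metric of the paper).
      import numpy as np
      from scipy.signal import fftconvolve

      def dense_orientation_histograms(img, n_bins=8):
          """Per-pixel histogram of gradient orientation, stacked as channels."""
          gy, gx = np.gradient(img.astype(float))
          mag = np.hypot(gx, gy)
          ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
          feat = np.zeros(img.shape + (n_bins,))
          bin_idx = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
          for b in range(n_bins):
              feat[..., b] = mag * (bin_idx == b)
          # L2-normalise each pixel's histogram for robustness to contrast changes
          norm = np.linalg.norm(feat, axis=-1, keepdims=True) + 1e-8
          return feat / norm

      def fft_similarity_map(ref_feat, tmpl_feat):
          """Sum of per-channel cross-correlations, computed in the frequency domain."""
          score = np.zeros((ref_feat.shape[0] - tmpl_feat.shape[0] + 1,
                            ref_feat.shape[1] - tmpl_feat.shape[1] + 1))
          for b in range(ref_feat.shape[-1]):
              score += fftconvolve(ref_feat[..., b],
                                   tmpl_feat[::-1, ::-1, b], mode="valid")
          return score

      rng = np.random.default_rng(0)
      optical = rng.random((256, 256))                     # synthetic "reference" image
      sar_like = optical + 0.3 * rng.random((256, 256))    # synthetic second modality
      template = dense_orientation_histograms(sar_like[100:132, 80:112])
      score = fft_similarity_map(dense_orientation_histograms(optical), template)
      print("best match at", np.unravel_index(np.argmax(score), score.shape))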

  2. Designing an Automated Assessment of Public Speaking Skills Using Multimodal Cues

    ERIC Educational Resources Information Center

    Chen, Lei; Feng, Gary; Leong, Chee Wee; Joe, Jilliam; Kitchen, Christopher; Lee, Chong Min

    2016-01-01

    Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of…

  3. An innovative multimodal virtual platform for communication with devices in a natural way

    NASA Astrophysics Data System (ADS)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

    As technology advances, people are increasingly interested in communicating with machines and computers naturally. This can make devices more compact and portable by avoiding remotes, keyboards and similar peripherals, and can help users live in an environment with fewer electromagnetic emissions. This has made recognition of natural modalities in human-computer interaction a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the full utilization of commands as well as data flow. In this paper, a multimodal platform is proposed in which, out of the many natural modalities such as eye gaze, speech, voice and face, human gestures are combined with human voice, so as to minimize the mean square error. This relaxes the strict conditions needed for accurate and robust interaction when a single mode is used. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is better suited to descriptive tasks. Human-computer interaction basically requires two broad stages, recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a difficult task, as it integrates the real world with a virtual environment. The main idea of the paper is to develop an efficient model for fusing data coming from heterogeneous sensors, a camera and a microphone. We show that efficiency is increased when heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this work is to design a robust system for users who are physically impaired or have limited technical knowledge.
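
    A minimal sketch of the feature-level fusion idea follows: simple image and audio features are concatenated into one vector and passed to a single classifier. The features, classifier and synthetic data are illustrative assumptions, not the model proposed in the record.

      # Minimal feature-level fusion sketch (illustrative only): concatenate simple
      # image features (gesture frame) and audio features (voice clip) and train
      # one classifier on the fused vector.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      def image_features(frame):
          """Coarse 8x8 mean-pooled intensities as a stand-in gesture descriptor."""
          h, w = frame.shape
          return frame[:h - h % 8, :w - w % 8].reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3)).ravel()

      def audio_features(clip, n_bands=16):
          """Log-energy in a few FFT bands as a stand-in voice descriptor."""
          spec = np.abs(np.fft.rfft(clip)) ** 2
          bands = np.array_split(spec, n_bands)
          return np.log1p(np.array([b.mean() for b in bands]))

      rng = np.random.default_rng(0)
      fused, labels = [], []
      for i in range(200):
          y = i % 2
          frame = rng.random((64, 64)) + 0.3 * y          # synthetic "gesture" frame
          clip = rng.standard_normal(1024) * (1 + y)      # synthetic "voice" clip
          fused.append(np.concatenate([image_features(frame), audio_features(clip)]))
          labels.append(y)

      X_tr, X_te, y_tr, y_te = train_test_split(np.array(fused), np.array(labels),
                                                test_size=0.25, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("held-out accuracy on synthetic data:", clf.score(X_te, y_te))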

  4. Multimodality image display station

    NASA Astrophysics Data System (ADS)

    Myers, H. Joseph

    1990-07-01

    The Multi-modality Image Display Station (MIDS) is designed for the use of physicians outside of the radiology department. Connected to a local area network or a host computer, it provides speedy access to digitized radiology images and written diagnostics needed by attending and consulting physicians near the patient bedside. Emphasis has been placed on low cost, high performance and ease of use. The work is being done as a joint study with the University of Texas Southwestern Medical Center at Dallas, and as part of a joint development effort with the Mayo Clinic. MIDS is a prototype, and should not be assumed to be an IBM product.

  5. Instructional Dialogue: Distance Education Students' Dialogic Behaviour

    ERIC Educational Resources Information Center

    Caspi, Avner; Gorsky, Paul

    2006-01-01

    Instructional systems, both distance education and campus-based, may be viewed in terms of intrapersonal and interpersonal "instructional dialogues," that mediate and facilitate learning respectively, and "instructional resources" that enable such dialogues. Resources include self-instruction texts, tutorials, instructor…

  6. Studying and Facilitating Dialogue in Select Online Management Courses

    ERIC Educational Resources Information Center

    Ivancevich, John M.; Gilbert, Jacqueline A.; Konopaske, Robert

    2009-01-01

    Dialogue is arguably one of the most significant elements of learning in higher education. The premise of this article is that online instructors can creatively facilitate dialogue for effectively teaching online management courses. This article presents a dialogue-focused framework for addressing significant behavioral, structural, and…

  7. Teacher-Student Dialogue: Transforming Teacher Interpersonal Behaviour and Pedagogical Praxis through Co-Teaching and Co-Generative Dialogue

    ERIC Educational Resources Information Center

    Rahmawati, Yuli; Koul, Rekha; Fisher, Darrell

    2015-01-01

    The paper reports a study of the effectiveness of co-teaching and co-generative dialogue in science learning and teaching in lower secondary science classes. The idea of co-teaching and co-generative dialogue--first proposed by two leading educationists, Roth and Tobin, in early 2000--made an international impact in educational research. In the…

  8. Fast and robust multimodal image registration using a local derivative pattern.

    PubMed

    Jiang, Dongsheng; Shi, Yonghong; Chen, Xinrong; Wang, Manning; Song, Zhijian

    2017-02-01

    Deformable multimodal image registration, which can benefit radiotherapy and image guided surgery by providing complementary information, remains a challenging task in the medical image analysis field due to the difficulty of defining a proper similarity measure. This article presents a novel, robust and fast binary descriptor, the discriminative local derivative pattern (dLDP), which is able to encode images of different modalities into similar image representations. dLDP calculates a binary string for each voxel according to the pattern of intensity derivatives in its neighborhood. The descriptor similarity is evaluated using the Hamming distance, which can be efficiently computed, instead of conventional L1 or L2 norms. For the first time, we validated the effectiveness and feasibility of the local derivative pattern for multimodal deformable image registration with several multi-modal registration applications. dLDP was compared with three state-of-the-art methods in artificial image and clinical settings. In the experiments of deformable registration between different magnetic resonance imaging (MRI) modalities from BrainWeb, between computed tomography and MRI images from patient data, and between MRI and ultrasound images from BITE database, we show our method outperforms localized mutual information and entropy images in terms of both accuracy and time efficiency. We have further validated dLDP for the deformable registration of preoperative MRI and three-dimensional intraoperative ultrasound images. Our results indicate that dLDP reduces the average mean target registration error from 4.12 mm to 2.30 mm. This accuracy is statistically equivalent to the accuracy of the state-of-the-art methods in the study; however, in terms of computational complexity, our method significantly outperforms other methods and is even comparable to the sum of the absolute difference. The results reveal that dLDP can achieve superior performance regarding both accuracy and
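
    The descriptor idea, a per-voxel binary string built from intensity-derivative comparisons and compared with the Hamming distance, can be illustrated with the 2D toy sketch below. The encoding here is a generic stand-in and differs from the exact dLDP definition in the paper.

      # Toy 2D sketch of a binary local-derivative-pattern descriptor compared with
      # the Hamming distance (a generic illustration, not the exact dLDP encoding).
      import numpy as np

      OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]      # 8-neighbourhood

      def binary_derivative_pattern(img):
          """One byte per pixel: sign agreement of the central and neighbour x-derivatives."""
          dx = np.gradient(img.astype(float), axis=1)
          code = np.zeros(img.shape, dtype=np.uint8)
          for bit, (dy_, dx_) in enumerate(OFFSETS):
              shifted = np.roll(np.roll(dx, dy_, axis=0), dx_, axis=1)
              code |= ((dx * shifted) >= 0).astype(np.uint8) << bit
          return code

      def hamming_distance(code_a, code_b):
          """Total number of differing bits between two code maps of equal shape."""
          return int(np.unpackbits(code_a ^ code_b).sum())

      rng = np.random.default_rng(0)
      mri_like = rng.random((128, 128))
      us_like = 1.0 - mri_like + 0.05 * rng.standard_normal((128, 128))  # inverted contrast

      a = binary_derivative_pattern(mri_like)
      b = binary_derivative_pattern(us_like)
      # Sign-agreement codes are insensitive to the contrast inversion, so the
      # aligned distance should be clearly smaller than the misaligned one.
      print("Hamming distance, aligned:   ", hamming_distance(a, b))
      print("Hamming distance, misaligned:", hamming_distance(a, np.roll(b, 5, axis=0)))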

  9. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
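
    The partitioning idea can be illustrated, independently of the Cell-specific optimisations, by distributing candidate slice matches across CPU cores; the similarity measure and synthetic data below are assumptions for the sketch, not the paper's implementation.

      # Conceptual sketch of partitioning 2D-in-3D registration across CPU cores
      # (a generic stand-in for the paper's Cell-specific optimisations): each worker
      # scores one candidate slice of the 3D volume against the 2D cross-section.
      import numpy as np
      from multiprocessing import Pool

      def normalized_mutual_information(a, b, bins=32):
          """Simple histogram-based NMI between two equally sized images."""
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
          hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
          hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
          return (hx + hy) / hxy

      def score_slice(args):
          index, slice_2d, section = args
          return index, normalized_mutual_information(slice_2d, section)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          volume = rng.random((60, 128, 128))              # synthetic 3D dataset
          section = volume[37] + 0.05 * rng.standard_normal((128, 128))  # noisy cut

          tasks = [(z, volume[z], section) for z in range(volume.shape[0])]
          with Pool() as pool:                             # partition work across cores
              scores = pool.map(score_slice, tasks)
          best = max(scores, key=lambda s: s[1])
          print("best-matching slice:", best[0])           # expected near z = 37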

  10. Camp Minden Dialogue

    EPA Pesticide Factsheets

    The Minden Dialogue Committee is made up of a group of individual volunteer citizens, community leaders, local and statewide organizations, scientists, elected officials and state representatives that will look at alternatives to address onsite materials.

  11. Human-computer interaction: psychological aspects of the human use of computing.

    PubMed

    Olson, Gary M; Olson, Judith S

    2003-01-01

    Human-computer interaction (HCI) is a multidisciplinary field in which psychology and other social sciences unite with computer science and related technical fields with the goal of making computing systems that are both useful and usable. It is a blend of applied and basic research, both drawing from psychological research and contributing new ideas to it. New technologies continuously challenge HCI researchers with new options, as do the demands of new audiences and uses. A variety of usability methods have been developed that draw upon psychological principles. HCI research has expanded beyond its roots in the cognitive processes of individual users to include social and organizational processes involved in computer usage in real environments as well as the use of computers in collaboration. HCI researchers need to be mindful of the longer-term changes brought about by the use of computing in a variety of venues.

  12. Radiolabeled Nanoparticles for Multimodality Tumor Imaging

    PubMed Central

    Xing, Yan; Zhao, Jinhua; Conti, Peter S.; Chen, Kai

    2014-01-01

    Each imaging modality has its own unique strengths. Multimodality imaging, taking advantage of strengths from two or more imaging modalities, can provide overall structural, functional, and molecular information, offering the prospect of improved diagnostic and therapeutic monitoring abilities. Multimodal, multifunctional molecular imaging devices are of great value for cancer diagnosis and treatment, and they have greatly accelerated the development of radionuclide-based multimodal molecular imaging. Radiolabeled nanoparticles bearing intrinsic properties have gained great interest in multimodality tumor imaging over the past decade. Significant breakthroughs have been made toward the development of various radiolabeled nanoparticles, which can be used as novel cancer diagnostic tools in multimodality imaging systems. It is expected that quantitative multimodality imaging with multifunctional radiolabeled nanoparticles will afford accurate and precise assessment of biological signatures in cancer in real time and thus pave the way toward personalized cancer medicine. This review addresses advantages and challenges in developing multimodality imaging probes by using different types of nanoparticles, and summarizes the recent advances in the applications of radiolabeled nanoparticles for multimodal imaging of tumors. The key issues involved in the translation of radiolabeled nanoparticles to the clinic are also discussed. PMID:24505237

  13. Prosodic alignment in human-computer interaction

    NASA Astrophysics Data System (ADS)

    Suzuki, N.; Katagiri, Y.

    2007-06-01

    Androids that replicate humans in form also need to replicate them in behaviour to achieve a high level of believability or lifelikeness. We explore the minimal social cues that can induce in people the human tendency for social acceptance, or ethopoeia, toward artifacts, including androids. It has been observed that people exhibit a strong tendency to adjust to each other, through a number of speech and language features in human-human conversational interactions, to obtain communication efficiency and emotional engagement. We investigate in this paper the phenomena related to prosodic alignment in human-computer interactions, with particular focus on human-computer alignment of speech characteristics. We found that people exhibit unidirectional and spontaneous short-term alignment of loudness and response latency in their speech in response to computer-generated speech. We believe this phenomenon of prosodic alignment provides one of the key components for building social acceptance of androids.

  14. Polarization-Sensitive Hyperspectral Imaging in vivo: A Multimode Dermoscope for Skin Analysis

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf B.; Durkin, Anthony J.; Chave, Robert; Lindsley, Erik H.; Farkas, Daniel L.

    2014-05-01

    Attempts to understand the changes in the structure and physiology of human skin abnormalities by non-invasive optical imaging are aided by spectroscopic methods that quantify, at the molecular level, variations in tissue oxygenation and melanin distribution. However, current commercial and research systems to map hemoglobin and melanin do not correlate well with pathology for pigmented lesions or darker skin. We developed a multimode dermoscope that combines polarization and hyperspectral imaging with an efficient analytical model to map the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models. For this system's proof of concept, human skin measurements on melanocytic nevus, vitiligo, and venous occlusion conditions were performed in volunteers. The resulting molecular distribution maps matched physiological and anatomical expectations, confirming a technologic approach that can be applied to next generation dermoscopes and having biological plausibility that is likely to appeal to dermatologists.
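
    The chromophore-mapping step can be approximated by a modified Beer-Lambert, least-squares unmixing of per-pixel absorbance into oxy-/deoxy-haemoglobin and melanin contributions. The extinction spectra below are placeholder numbers, and the model is far simpler than the analytical model used by the SkinSpect system.

      # Simplified per-pixel chromophore unmixing sketch (modified Beer-Lambert with
      # linear least squares); extinction coefficients are placeholder numbers, and
      # this is not the SkinSpect analytical model.
      import numpy as np

      wavelengths = np.array([470, 530, 560, 590, 620, 660, 700, 760, 810, 850])

      # Placeholder relative extinction spectra (columns: "HbO2", "Hb", "melanin")
      E = np.column_stack([
          np.array([3.3, 3.9, 3.2, 2.6, 0.9, 0.3, 0.3, 0.6, 0.9, 1.1]),
          np.array([3.1, 3.7, 3.5, 3.3, 1.5, 0.9, 0.6, 1.3, 0.8, 0.7]),
          np.exp(-(wavelengths - 450.0) / 250.0),
      ])

      def unmix(reflectance_cube):
          """reflectance_cube: (rows, cols, n_wavelengths) diffuse reflectance in (0, 1]."""
          absorbance = -np.log(np.clip(reflectance_cube, 1e-6, 1.0))
          flat = absorbance.reshape(-1, absorbance.shape[-1]).T      # (n_wl, n_pixels)
          conc, *_ = np.linalg.lstsq(E, flat, rcond=None)            # (3, n_pixels)
          maps = np.clip(conc, 0, None).T.reshape(absorbance.shape[:2] + (3,))
          oxy, deoxy, melanin = maps[..., 0], maps[..., 1], maps[..., 2]
          saturation = oxy / (oxy + deoxy + 1e-8)
          return melanin, saturation

      rng = np.random.default_rng(0)
      true_conc = rng.random((32, 32, 3))
      cube = np.exp(-true_conc @ E.T)          # forward model: synthetic reflectance
      melanin_map, sat_map = unmix(cube)
      print(melanin_map.shape, float(sat_map.mean().round(3)))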

  15. Relating Dialogue and Dialectics: A Philosophical Perspective

    ERIC Educational Resources Information Center

    Dafermos, Manolis

    2018-01-01

    Dialectics and a dialogical approach constitute two distinct theoretical frameworks with long intellectual histories. The question of relations between dialogue and dialectics provokes discussions in academic communities. The present paper highlights the need to clarify the concepts "dialogue" and "dialectics" and explore their…

  16. Potential of Cognitive Computing and Cognitive Systems

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2015-01-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments, incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces, and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized and collaborative learning, and will support effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp

  17. Multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography at 400 kHz

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Multimodal imaging systems that combine scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) have demonstrated the utility of concurrent en face and volumetric imaging for aiming, eye tracking, bulk motion compensation, mosaicking, and contrast enhancement. However, this additional functionality trades off with increased system complexity and cost because both SLO and OCT generally require dedicated light sources, galvanometer scanners, relay and imaging optics, detectors, and control and digitization electronics. We previously demonstrated multimodal ophthalmic imaging using swept-source spectrally encoded SLO and OCT (SS-SESLO-OCT). Here, we present system enhancements and a new optical design that increase our SS-SESLO-OCT data throughput by >7x and field-of-view (FOV) by >4x. A 200 kHz 1060 nm Axsun swept-source was optically buffered to 400 kHz sweep-rate, and SESLO and OCT were simultaneously digitized on dual input channels of a 4 GS/s digitizer at 1.2 GS/s per channel using a custom k-clock. We show in vivo human imaging of the anterior segment out to the limbus and retinal fundus over a >40° FOV. In addition, nine overlapping volumetric SS-SESLO-OCT volumes were acquired under video-rate SESLO preview and guidance. In post-processing, all nine SESLO images and en face projections of the corresponding OCT volumes were mosaicked to show widefield multimodal fundus imaging with a >80° FOV. Concurrent multimodal SS-SESLO-OCT may have applications in clinical diagnostic imaging by enabling aiming, image registration, and multi-field mosaicking and benefit intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted image-based surrogate biomarkers of disease.

  18. Construction of Multimodal Transport Information Platform

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Cheng, Yu; Zhao, Zhi

    2018-06-01

    With the rapid development of the economy, the volume of transportation in China is increasing, the opening of the market is accelerating, the scale of enterprises is expanding, service quality is improving, and container multimodal transport is developing continuously. The hardware infrastructure of container multimodal transport has improved markedly, but the construction of multimodal transport network platforms is still insufficient. Taking the Shandong region of China as an example, the present state of container multimodal transport in the Shandong area can no longer meet the requirements of the rapid growth of container traffic, and the construction of a network platform urgently needs to be addressed. This paper therefore briefly describes a concept for the construction of a multimodal transport network platform in the Shandong area, in order to achieve the rapid development of multimodal transport.

  19. Toward in vivo diagnosis of skin cancer using multimode imaging dermoscopy: (II) molecular mapping of highly pigmented lesions

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.

    2014-03-01

    We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near infrared wavelengths for mapping the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements including a melanocytic nevus, and venous occlusion conditions were investigated and compared with other ratiometric spectral imaging approaches. Access to the broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.

  20. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2014-10-01

    AWARD NUMBER: W81XWH-13-1-0494. TITLE: Tinnitus Multimodal Imaging. REPORT TYPE: Annual. DATES COVERED: 30 Sept 2013 - 29 Oct 2014. DISTRIBUTION/AVAILABILITY: Approved for public release; distribution unlimited. ABSTRACT (truncated): Tinnitus is a common auditory

  1. Introducing the Geneva Multimodal expression corpus for experimental research on emotion perception.

    PubMed

    Bänziger, Tanja; Mortillaro, Marcello; Scherer, Klaus R

    2012-10-01

    Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.

  2. Multimodal Counseling Interventions: Effect on Human Papilloma Virus Vaccination Acceptance

    PubMed Central

    Salisbury, Helen; Bay, Curtis

    2017-01-01

    Human papilloma virus (HPV) vaccine was developed to reduce HPV-attributable cancers, external genital warts (EGW), and recurrent respiratory papillomatosis. Adolescent HPV vaccination series completion rates are less than 40% in the United States of America, but up to 80% in Australia and the United Kingdom. Population-based herd immunity requires 80% or greater vaccination series completion rates. Pro-vaccination counseling facilitates increased vaccination rates. Multimodal counseling interventions may increase HPV vaccination series non-completers’ HPV-attributable disease knowledge and HPV-attributable disease prophylaxis (vaccination) acceptance over a brief 14-sentence counseling intervention. An online, 4-group, randomized controlled trial, with 260 or more participants per group, found that parents were more likely to accept HPV vaccination offers for their children than were childless young adults for themselves (68.2% and 52.9%). A combined audiovisual and patient health education handout (PHEH) intervention raised knowledge of HPV vaccination purpose, p = 0.02, and HPV vaccination acceptance for seven items, p < 0.001 to p = 0.023. The audiovisual intervention increased HPV vaccination acceptance for five items, p < 0.001 to p = 0.006. That HPV causes EGW, and that HPV vaccination prevents HPV-attributable diseases were better conveyed by the combined audiovisual and PHEH than the control 14-sentence counseling intervention alone. PMID:29113137

  3. Multimodal Counseling Interventions: Effect on Human Papilloma Virus Vaccination Acceptance.

    PubMed

    Nwanodi, Oroma; Salisbury, Helen; Bay, Curtis

    2017-11-06

    Human papilloma virus (HPV) vaccine was developed to reduce HPV-attributable cancers, external genital warts (EGW), and recurrent respiratory papillomatosis. Adolescent HPV vaccination series completion rates are less than 40% in the United States of America, but up to 80% in Australia and the United Kingdom. Population-based herd immunity requires 80% or greater vaccination series completion rates. Pro-vaccination counseling facilitates increased vaccination rates. Multimodal counseling interventions may increase HPV vaccination series non-completers' HPV-attributable disease knowledge and HPV-attributable disease prophylaxis (vaccination) acceptance over a brief 14-sentence counseling intervention. An online, 4-group, randomized controlled trial, with 260 or more participants per group, found that parents were more likely to accept HPV vaccination offers for their children than were childless young adults for themselves (68.2% and 52.9%). A combined audiovisual and patient health education handout (PHEH) intervention raised knowledge of HPV vaccination purpose, p = 0.02, and HPV vaccination acceptance for seven items, p < 0.001 to p = 0.023. The audiovisual intervention increased HPV vaccination acceptance for five items, p < 0.001 to p = 0.006. That HPV causes EGW, and that HPV vaccination prevents HPV-attributable diseases were better conveyed by the combined audiovisual and PHEH than the control 14-sentence counseling intervention alone.

  4. Diversity Initiatives in Higher Education: Intergroup Dialogue as Pedagogy across the Curriculum

    ERIC Educational Resources Information Center

    Clark, Christine

    2005-01-01

    The idea for the Intergroup Dialogue as Pedagogy Across the Curriculum (INTERACT) Pilot Project emerged, quite organically, from the cross-pollination of two major initiatives of the Office of Human Relations Programs (OHRP), the equity compliance and multicultural education arm of the Office of the President at the University of Maryland, College…

  5. Using intergroup dialogue to promote social justice and change.

    PubMed

    Dessel, Adrienne; Rogge, Mary E; Garlington, Sarah B

    2006-10-01

    Intergroup dialogue is a public process designed to involve individuals and groups in an exploration of societal issues such as politics, racism, religion, and culture that are often flashpoints for polarization and social conflict. This article examines intergroup dialogue as a bridging mechanism through which social workers in clinical, other direct practice, organizer, activist, and other roles across the micro-macro practice spectrum can engage with people in conflict to advance advocacy, justice, and social change. We define intergroup dialogue and provide examples in not-for-profit or community-based and academic settings of how intergroup dialogue has been applied to conflicts around topics of race and ethnic nationality, sexual orientation, religion, and culture. We recommend practice-, policy-, and research-related actions that social workers can take to understand and use intergroup dialogue.

  6. A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images.

    PubMed

    Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito

    2012-07-01

    In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface rendering and volume rendering methods. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091; p<0.001, Mann-Whitney U-test). The authors report a new method for automatic registration of preoperative imaging data
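
    The "normalized mutual information method" mentioned above is a standard similarity measure for aligning images of different modalities. A generic sketch of the metric (not the authors' implementation) is shown below; maximizing it over candidate transforms drives the registration.

    import numpy as np

    def normalized_mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
        """NMI(a, b) = (H(a) + H(b)) / H(a, b); larger means better aligned."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        mri = rng.random((64, 64))
        aligned = np.sqrt(mri)                 # different "modality", same structure
        shuffled = rng.permutation(aligned.ravel()).reshape(64, 64)
        print(normalized_mutual_information(mri, aligned))   # high
        print(normalized_mutual_information(mri, shuffled))  # low (near 1)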

  7. Dialogue and Its Conditions: The Construction of European Citizenship

    ERIC Educational Resources Information Center

    Hodgson, Naomi

    2011-01-01

    The Council of Europe's "White Paper on Intercultural Dialogue" provides an example of the way in which dialogue has become part of the current mode of governance in Europe. Throughout current policy, the terms "dialogue" and "voice" inform the introduction of practices and tools that constitute the citizen, or active learning citizen. Notions of…

  8. Sharing Solutions: Persistence and Grounding in Multimodal Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Dillenbourg, Pierre; Traum, David

    2006-01-01

    This article reports on an exploratory study of the relationship between grounding and problem solving in multimodal computer-mediated collaboration. This article examines two different media, a shared whiteboard and a MOO environment that includes a text chat facility. A study was done on how the acknowledgment rate (how often partners give…

  9. Multimodal browsing using VoiceXML

    NASA Astrophysics Data System (ADS)

    Caccia, Giuseppe; Lancini, Rosa C.; Peschiera, Giuseppe

    2003-06-01

    With the increasing spread of devices such as personal computers, WAP phones, and personal digital assistants connected to the World Wide Web, end users feel the need to browse the Internet through multiple modalities. We investigate how to create a user interface and a service distribution platform that grant the user access to the Internet through standard I/O modalities and voice simultaneously. Different architectures are evaluated, suggesting the most suitable one for each client terminal (PC or WAP). In particular, the design of the multimodal user-machine interface addresses the synchronization of graphical and voice content.

  10. Choosing to Interact: Exploring the Relationship between Learner Personality, Attitudes, and Tutorial Dialogue Participation

    ERIC Educational Resources Information Center

    Ezen-Can, Aysu; Boyer, Kristy Elizabeth

    2015-01-01

    The tremendous effectiveness of intelligent tutoring systems is due in large part to their interactivity. However, when learners are free to choose the extent to which they interact with a tutoring system, not all learners do so actively. This paper examines a study with a natural language tutorial dialogue system for computer science, in which…

  11. Appearance-based multimodal human tracking and identification for healthcare in the digital home.

    PubMed

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-08-05

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
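
    The track-based majority voting mentioned above can be illustrated with a few lines of Python. This is a toy sketch of the general idea rather than the authors' code: each frame of a track contributes per-modality identity votes (face, body appearance, silhouette), and the track is labeled with the identity that accumulates the most votes.

    from collections import Counter
    from typing import Dict, List

    def identify_track(frame_votes: List[Dict[str, str]]) -> str:
        """frame_votes: one dict per frame mapping modality -> predicted identity."""
        tally = Counter()
        for votes in frame_votes:
            tally.update(votes.values())
        identity, _count = tally.most_common(1)[0]
        return identity

    if __name__ == "__main__":
        track = [
            {"face": "alice", "body": "alice", "silhouette": "bob"},
            {"body": "alice", "silhouette": "alice"},                  # face occluded in this frame
            {"face": "bob", "body": "alice", "silhouette": "alice"},
        ]
        print(identify_track(track))   # -> "alice"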

  12. Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

    PubMed Central

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-01-01

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207

  13. Enhancing Learning through Human Computer Interaction

    ERIC Educational Resources Information Center

    McKay, Elspeth, Ed.

    2007-01-01

    Enhancing Learning Through Human Computer Interaction is an excellent reference source for human computer interaction (HCI) applications and designs. This "Premier Reference Source" provides a complete analysis of online business training programs and e-learning in the higher education sector. It describes a range of positive outcomes for linking…

  14. Educational Application of Dialogue System To Support e-Learning.

    ERIC Educational Resources Information Center

    Kim, Youn-Gi; Lee, Chul-Hwan; Han, Sun-Gwan

    This study is on the design and implementation of an educational dialogue system to support e-learning. The learning domain to apply the dialogue system used the subject of geometry. The knowledge in the dialogue-based system for learning geometry was created and represented by XML-based AIML. The implemented system in this study can understand…

  15. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness, and naturalness. This paper describes the design of a compact, wearable, low-power HCI device for gesture recognition. The system combines multi-mode sensing signals, a vision signal and a motion signal, and the equipment integrates a depth camera and a motion sensor. After tight integration, the device is compact and portable (40 mm × 30 mm). The system is built on a layered module framework, which supports real-time collection (60 fps), processing, and transmission through synchronous fusion of asynchronously and concurrently collected data and wireless Bluetooth 4.0 transmission. To minimize energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the motion-sensor algorithm. To test the equipment's function and performance, a gesture recognition algorithm was run on the system; the results show that overall energy consumption can be as low as 0.5 W.

  16. Investigation of protein selectivity in multimodal chromatography using in silico designed Fab fragment variants.

    PubMed

    Karkov, Hanne Sophie; Krogh, Berit Olsen; Woo, James; Parimal, Siddharth; Ahmadian, Haleh; Cramer, Steven M

    2015-11-01

    In this study, a unique set of antibody Fab fragments was designed in silico and produced to examine the relationship between protein surface properties and selectivity in multimodal chromatographic systems. We hypothesized that multimodal ligands containing both hydrophobic and charged moieties would interact strongly with protein surface regions where charged groups and hydrophobic patches were in close spatial proximity. Protein surface property characterization tools were employed to identify the potential multimodal ligand binding regions on the Fab fragment of a humanized antibody and to evaluate the impact of mutations on surface charge and hydrophobicity. Twenty Fab variants were generated by site-directed mutagenesis, recombinant expression, and affinity purification. Column gradient experiments were carried out with the Fab variants in multimodal, cation-exchange, and hydrophobic interaction chromatographic systems. The results clearly indicated that selectivity in the multimodal system was different from the other chromatographic modes examined. Column retention data for the reduced charge Fab variants identified a binding site comprising light chain CDR1 as the main electrostatic interaction site for the multimodal and cation-exchange ligands. Furthermore, the multimodal ligand binding was enhanced by additional hydrophobic contributions as evident from the results obtained with hydrophobic Fab variants. The use of in silico protein surface property analyses combined with molecular biology techniques, protein expression, and chromatographic evaluations represents a previously undescribed and powerful approach for investigating multimodal selectivity with complex biomolecules. © 2015 Wiley Periodicals, Inc.

  17. Promoting a Dialogue between Neuroscience and Education

    ERIC Educational Resources Information Center

    Turner, David A.

    2011-01-01

    There have been a number of calls for a 'dialogue' between neuroscience and education. However, 'dialogue' implies an equal conversation between partners. The outcome of collaboration between neuroscientists and educators is not normally expected to be so balanced. Educationists are expected to learn from neuroscience how to conduct research with…

  18. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2014-01-01

    Only fragments of the report text were extracted: the report describes a comprehensive multimodal dataset containing fingerprint, iris, palmprint, hand geometry, and voice samples from subjects of different age, gender, and ethnicity (Table I), along with a face database, and discusses the computational complexity of the proposed joint sparse representation approach.

  19. Skill training in multimodal virtual environments.

    PubMed

    Gopher, Daniel

    2012-01-01

    Multimodal, immersive, virtual reality (VR) techniques open new perspectives for perceptual-motor skill trainers. They also introduce new risks and dangers. This paper describes the benefits and pitfalls of multimodal training and the cognitive building blocks of multimodal VR training simulators.

  20. Natural Language Based Multimodal Interface for UAV Mission Planning

    NASA Technical Reports Server (NTRS)

    Chandarana, Meghan; Meszaros, Erica L.; Trujillo, Anna; Allen, B. Danette

    2017-01-01

    As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted in order to evaluate the efficacy of the multimodal interface.

  1. The Impact Of Multimode Fiber Chromatic Dispersion On Data Communications

    NASA Astrophysics Data System (ADS)

    Hackert, Michael J.

    1990-01-01

    Maximum capability for the lowest cost is the goal of contemporary communications managers. With all of the competitive pressures that modern businesses are experiencing, communications needs must be met with the most information-carrying capacity for the lowest cost. Optical fiber communication systems meet these requirements while providing reliability, system integrity, and potential future upgradability. Consequently, optical fiber is finding numerous applications in addition to its traditional telephony plant. Fiber-based systems are meeting these requirements in building networks and computer interconnects at a lower cost than copper-based systems. A fiber type being chosen by industry to meet these needs in standard systems such as FDDI is multimode fiber. Multimode fiber systems offer cost advantages over single-mode fiber through lower fiber connection costs. System designers can also gain savings by using low-cost, high-reliability, wide-spectral-width sources such as LEDs instead of lasers and by operating at higher bit rates than were used for multimode systems in the past. However, in order to maximize the cost savings while ensuring the system will operate as intended, the chromatic dispersion of the fiber must be taken into account. This paper explains how to do that and shows how to calculate multimode chromatic dispersion for each of the standard fiber sizes (50 μm, 62.5 μm, 85 μm, and 100 μm core diameter).
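
    A back-of-envelope version of the dispersion budget discussed above can be written down directly from the standard fit D(λ) = (S0/4)(λ - λ0^4/λ^3) and the pulse-broadening relation Δτ = |D(λ)|·Δλ·L. The parameter values below (zero-dispersion wavelength, slope, LED spectral width) are illustrative assumptions, not figures from the paper.

    def dispersion_ps_per_nm_km(wavelength_nm, lambda0_nm=1320.0, s0=0.097):
        """Chromatic dispersion D(lambda) = (S0/4) * (lambda - lambda0**4 / lambda**3), in ps/(nm*km)."""
        return (s0 / 4.0) * (wavelength_nm - lambda0_nm**4 / wavelength_nm**3)

    def chromatic_broadening_ns(wavelength_nm, spectral_width_nm, length_km):
        """Pulse broadening |D| * delta_lambda * L, converted from ps to ns."""
        return abs(dispersion_ps_per_nm_km(wavelength_nm)) * spectral_width_nm * length_km / 1000.0

    if __name__ == "__main__":
        # Hypothetical link: 1300 nm LED, 120 nm spectral width, 2 km of multimode fiber.
        dt_ns = chromatic_broadening_ns(1300.0, 120.0, 2.0)
        print(f"chromatic broadening ~ {dt_ns:.2f} ns")
        print(f"rule-of-thumb bandwidth ~ {0.44 / dt_ns:.2f} GHz")   # Gaussian-pulse approximation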

  2. Human-computer interface

    DOEpatents

    Anderson, Thomas G.

    2004-12-21

    The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
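
    The boundary behavior described above (resistance that grows as the interaction point approaches a boundary, then perceptibly drops once it is crossed) can be captured by a simple force profile. The sketch below is a toy illustration with made-up gain and ramp parameters, not the patented implementation.

    def boundary_force(signed_distance: float, gain: float = 2.0, ramp: float = 1.0) -> float:
        """signed_distance > 0: approaching the boundary from inside the current region;
        signed_distance <= 0: the boundary has been traversed."""
        if signed_distance <= 0.0:
            return 0.0                                   # abrupt drop after crossing
        if signed_distance >= ramp:
            return 0.0                                   # far from the boundary: no resistance
        return gain * (ramp - signed_distance) / ramp    # ramps up as the boundary nears

    if __name__ == "__main__":
        for d in (1.5, 0.8, 0.4, 0.1, -0.1):
            print(d, round(boundary_force(d), 2))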

  3. Ethos in Fukushima and the ICRP dialogue seminars.

    PubMed

    Ando, R

    2016-12-01

    Ethos in Fukushima, a non-profit organisation, participated in 10 of the 12 International Commission on Radiological Protection (ICRP) dialogue seminars over the past 4 years. The slides and videos that were shown at the seminars are recorded on the Ethos in Fukushima website ( http://ethos-fukushima.blogspot.jp/p/icrp-dialogue.html ). I would like to introduce the activities of Ethos in Fukushima to date, and explain why the ICRP dialogue materials have come to be published on its website.

  4. On the Benefits of Multimodal Annotations for Vocabulary Uptake from Reading

    ERIC Educational Resources Information Center

    Boers, Frank; Warren, Paul; Grimshaw, Gina; Siyanova-Chanturia, Anna

    2017-01-01

    Several research articles published in the realm of Computer Assisted Language Learning (CALL) have reported evidence of the benefits of multimodal annotations, i.e. the provision of pictorial as well as verbal clarifications, for vocabulary uptake from reading. Almost invariably, these publications account for the observed benefits with reference…

  5. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262

  6. Perspectives on health policy dialogue: definition, perceived importance and coordination.

    PubMed

    Nabyonga-Orem, Juliet; Ousman, Kevin; Estrelli, Yolanda; Rene, Adzodo K M; Yakouba, Zina; Gebrikidane, Mesfin; Mamoud, Drave; Kwamie, Aku

    2016-07-18

    Countries in the World Health Organization African Region have witnessed an increase in global health initiatives in the recent past. Although these have provided opportunities for expanding coverage of health interventions; their poor alignment with the countries' priorities and weak coordination, are among the challenges that have affected their impact. A well-coordinated health policy dialogue provides an opportunity to address these challenges, but calls for common understanding among stakeholders of what policy dialogue entails. This paper seeks to assess stakeholders' understanding and perceived importance of health policy dialogue and of policy dialogue coordination. This was a cross-sectional descriptive study using qualitative methods. Interviews were conducted with 90 key informants from the national and sub-national levels in Lusophone Cabo Verde, Francophone Chad, Guinea and Togo, and Anglophone Liberia using an open-ended interview guide. The interviews were transcribed verbatim, coded and then put through inductive thematic content analysis using QRS software Version 10. There were variations in the definition of policy dialogue that were not necessarily linked to the linguistic leaning of respondents' countries or whether the dialogue took place at the national or sub-national level. The definitions were grouped into five categories based on whether they had an outcome, operational, process, forum or platform, or interactive and evidence-sharing orientation. The stakeholders highlighted multiple benefits of policy dialogue including ensuring stakeholder participation, improving stakeholder harmonisation and alignment, supporting implementation of health policies, fostering continued institutional learning, providing a guiding framework and facilitating stakeholder analysis. Policy dialogue offers the opportunity to improve stakeholder participation in policy development and promote aid effectiveness. However, conceptual clarity is needed to ensure

  7. Multimodal emotional state recognition using sequence-dependent deep hierarchical features.

    PubMed

    Barros, Pablo; Jirak, Doreen; Weber, Cornelius; Wermter, Stefan

    2015-12-01

    Emotional state recognition has become an important topic for human-robot interaction in the past years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion and thereby extend the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard to be recognized by robots. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches present in the literature, which are based on several explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable to be used in an HRI scenario. Our experiments show that a significant improvement of recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from 82.5% reported in the literature to 91.3% for a benchmark dataset on spontaneous emotion expressions. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  8. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility-impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker and a soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance, with an ITR of 37.77 bits/min, among the three different conditions (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
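
    For context, the information transfer rate reported above is commonly computed with the Wolpaw formula, which combines the number of available commands, the selection accuracy, and the selection speed; whether the authors used exactly this definition is an assumption. A small sketch:

    import math

    def itr_bits_per_min(n_commands: int, accuracy: float, selections_per_min: float) -> float:
        """ITR = [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))] * selections per minute."""
        p = accuracy
        if p >= 1.0:
            bits = math.log2(n_commands)
        else:
            bits = (math.log2(n_commands)
                    + p * math.log2(p)
                    + (1.0 - p) * math.log2((1.0 - p) / (n_commands - 1)))
        return bits * selections_per_min

    if __name__ == "__main__":
        # Hypothetical numbers: 9 commands, 90% accuracy, 15 selections per minute.
        print(round(itr_bits_per_min(9, 0.90, 15.0), 2), "bits/min")   # ~36 bits/min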

  9. Practical multimodal care for cancer cachexia.

    PubMed

    Maddocks, Matthew; Hopkinson, Jane; Conibear, John; Reeves, Annie; Shaw, Clare; Fearon, Ken C H

    2016-12-01

    Cancer cachexia is common and reduces function, treatment tolerability and quality of life. Given its multifaceted pathophysiology a multimodal approach to cachexia management is advocated for, but can be difficult to realise in practice. We use a case-based approach to highlight practical approaches to the multimodal management of cachexia for patients across the cancer trajectory. Four cases with lung cancer spanning surgical resection, radical chemoradiotherapy, palliative chemotherapy and no anticancer treatment are presented. We propose multimodal care approaches that incorporate nutritional support, exercise, and anti-inflammatory agents, on a background of personalized oncology care and family-centred education. Collectively, the cases reveal that multimodal care is part of everyone's remit, often focuses on supported self-management, and demands buy-in from the patient and their family. Once operationalized, multimodal care approaches can be tested pragmatically, including alongside emerging pharmacological cachexia treatments. We demonstrate that multimodal care for cancer cachexia can be achieved using simple treatments and without a dedicated team of specialists. The sharing of advice between health professionals can help build collective confidence and expertise, moving towards a position in which every team member feels they can contribute towards multimodal care.

  10. U.S. Army Research Laboratory (ARL) multimodal signatures database

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly

    2008-04-01

    The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small arms gunfire from potential sniper weapons, explosives, and many other high value targets. This data is made available to Department of Defense (DoD) and DoD contractors, Intel agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.
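
    Reading HDF5 signature files from Python is straightforward with h5py, as the abstract suggests. The sketch below is generic: the file name, group layout, and attribute names are hypothetical stand-ins, not the actual MMSDB schema.

    import h5py

    def load_signature(path: str, modality: str):
        """Return one modality's time series from an HDF5 signature file."""
        with h5py.File(path, "r") as f:
            dset = f[f"/signatures/{modality}/data"]        # hypothetical dataset path
            fs = dset.attrs.get("sample_rate_hz", None)     # hypothetical attribute
            print(modality, dset.shape, "sample rate:", fs)
            return dset[...]

    if __name__ == "__main__":
        acoustic = load_signature("example_signature.h5", "acoustic")   # hypothetical file
        seismic = load_signature("example_signature.h5", "seismic")
        print(acoustic.mean(), seismic.std())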

  11. The Practice of Dialogue in Critical Pedagogy

    ERIC Educational Resources Information Center

    Kaufmann, Jodi Jan

    2010-01-01

    This paper examines dialogue in the higher education classroom. Instigated by my teaching experiences and the paucity of empirical studies examining dialogue in the higher education classroom, I present a re-examination of data I collected in 1996 for an ethnographic study focusing on the experiences of the participants in an ethnic literature…

  12. [The extraction of truth: apropos of the Socratic dialogue].

    PubMed

    Van Rossem, Kristof; Bolten, Hans

    2002-01-01

    The Socratic dialogue is a philosophical method that enables colleagues to investigate which judgements people have about their experiences and how these judgements can be grounded. In this article, the reader will learn more about the historical background, the organisation, the levels of dialogue, and the role of the facilitator. We also consider the results that regular practice of Socratic dialogue can have for professional dentists, the most important being a growing sensitivity and lucidity in daily social life with patients and colleagues. In the dialogue, this is practised by sharpening the moral perception of concrete details in lived experience.

  13. Multimodal Career Education for Nursing Students.

    ERIC Educational Resources Information Center

    Southern, Stephen; Smith, Robert L.

    A multimodal career education model entitled BEST IDEA was field tested as an approach to the problem of retaining skilled nurses in the work force. Using multimodal assessment and intervention strategies derived from the multimodal behavior therapy of Arnold Lazarus, researchers developed an individualized career development assessment and…

  14. Locating the Semiotic Power of Multimodality

    ERIC Educational Resources Information Center

    Hull, Glynda A.; Nelson, Mark Evan

    2005-01-01

    This article reports research that attempts to characterize what is powerful about digital multimodal texts. Building from recent theoretical work on understanding the workings and implications of multimodal communication, the authors call for a continuing empirical investigation into the roles that digital multimodal texts play in real-world…

  15. Making IBM's Computer, Watson, Human

    ERIC Educational Resources Information Center

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, "Jeopardy," to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered…

  16. Simultaneous measurement of breathing rate and heart rate using a microbend multimode fiber optic sensor

    NASA Astrophysics Data System (ADS)

    Chen, Zhihao; Lau, Doreen; Teo, Ju Teng; Ng, Soon Huat; Yang, Xiufeng; Kei, Pin Lin

    2014-05-01

    We propose and demonstrate the feasibility of using a highly sensitive microbend multimode fiber optic sensor for simultaneous measurement of breathing rate (BR) and heart rate (HR). The sensing system consists of a transceiver, microbend multimode fiber, and a computer. The transceiver is comprised of an optical transmitter, an optical receiver, and circuits for data communication with the computer via Bluetooth. Comparative experiments conducted between the sensor and predicate commercial physiologic devices showed an accuracy of ±2 bpm for both BR and HR measurement. Our preliminary study of simultaneous measurement of BR and HR in a clinical trial conducted on 11 healthy subjects during magnetic resonance imaging (MRI) also showed very good agreement with measurements obtained from conventional MR-compatible devices.
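
    One common way to separate breathing and heart components from a single sensor trace, in the spirit of the system above, is to band-pass filter the signal into the typical respiration and cardiac frequency bands and pick the spectral peak in each. The pass bands, sampling rate, and synthetic signal below are assumptions for illustration, not the paper's processing chain.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(x, fs_hz, low_hz, high_hz, order=3):
        b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs_hz)
        return filtfilt(b, a, x)

    def rate_per_min(y, fs_hz):
        spectrum = np.abs(np.fft.rfft(y))
        freqs = np.fft.rfftfreq(y.size, 1.0 / fs_hz)
        return 60.0 * freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

    if __name__ == "__main__":
        fs = 50.0                                           # assumed sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        # Synthetic trace: 0.25 Hz breathing (15 bpm) + 1.2 Hz heartbeat (72 bpm) + noise.
        x = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)
        x += 0.05 * np.random.default_rng(0).standard_normal(t.size)
        print("BR ~", round(rate_per_min(bandpass(x, fs, 0.1, 0.5), fs)), "breaths/min")
        print("HR ~", round(rate_per_min(bandpass(x, fs, 0.8, 2.5), fs)), "beats/min")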

  17. Verbal redundancy aids memory for filmed entertainment dialogue.

    PubMed

    Hinkin, Michael P; Harris, Richard J; Miranda, Andrew T

    2014-01-01

    Three studies investigated the effects of presentation modality and redundancy of verbal content on recognition memory for entertainment film dialogue. U.S. participants watched two brief movie clips and afterward answered multiple-choice questions about information from the dialogue. Experiment 1 compared recognition memory for spoken dialogue in the native language (English) with subtitles in English, French, or no subtitles. Experiment 2 compared memory for material in English subtitles with spoken dialogue in English, French, or no sound. Experiment 3 examined three control conditions with no spoken or captioned material in the native language. All participants watched the same video clips and answered the same questions. Performance was consistently good whenever English dialogue appeared in either the subtitles or sound, and best of all when it appeared in both, supporting the facilitation of verbal redundancy. Performance was also better when English was only in the subtitles than when it was only spoken. Unexpectedly, sound or subtitles in an unfamiliar language (French) modestly improved performance, as long as there was also a familiar channel. Results extend multimedia research on verbal redundancy for expository material to verbal information in entertainment media.

  18. Tinnitus Multimodal Imaging

    DTIC Science & Technology

    2015-10-01

    AWARD NUMBER: W81XWH-13-1-0494. TITLE: Tinnitus Multimodal Imaging. PRINCIPAL INVESTIGATOR: Steven Wan Cheung. DISTRIBUTION/AVAILABILITY: Approved for public release. ABSTRACT (truncated): Tinnitus is a common auditory perceptual disorder whose neural substrates are under intense debate. This project

  19. The Use of the Dialogue Concepts from the Arsenal of the Norwegian Dialogue Pedagogy in the Time of Postmodernism

    ERIC Educational Resources Information Center

    Gradovski, Mikhail

    2012-01-01

    Inspired by the views by the American educationalist Henry Giroux on the role teachers and educationalists should be playing in the time of postmodernism and by Abraham Maslow's concept of biological idiosyncrasy, the author discusses how the concepts of the dialogues created by the representatives of Norwegian Dialogue Pedagogy, Hans Skjervheim,…

  20. Fruit Carts: A Domain and Corpus for Research in Dialogue Systems and Psycholinguistics.

    PubMed

    Aist, Gregory; Campana, Ellen; Allen, James; Swift, Mary; Tanenhaus, Michael K

    2012-09-01

    We describe a novel domain, Fruit Carts, aimed at eliciting human language production for the twin purposes of (a) dialogue system research and development and (b) psycholinguistic research. Fruit Carts contains five tasks: choosing a cart, placing it on a map, painting the cart, rotating the cart, and filling the cart with fruit. Fruit Carts has been used for research in psycholinguistics and in dialogue systems. Based on these experiences, we discuss how well the Fruit Carts domain meets four desired features: unscripted, context-constrained, controllable difficulty, and separability into semi-independent subdialogues. We describe the domain in sufficient detail to allow others to replicate it; researchers interested in using the corpora themselves are encouraged to contact the authors directly.

  1. Education as Dialogue

    ERIC Educational Resources Information Center

    Jourard, Sidney M.

    1978-01-01

    This discussion, the author's last public presentation before his death in 1974, is a dedication to dialogue as the essence of education. In the midst of modern consciousness-altering technology, he valued authentic conversation as more powerful than LSD, meditation, and all the rest. (Editor/RK)

  2. Interfaith Dialogue at Peace Museums in Kenya

    ERIC Educational Resources Information Center

    Gachanga, Timothy; Mutisya, Munuve

    2015-01-01

    This paper makes a case for further studies on the contribution of peace museums to interfaith dialogue debate. Based on our experiences as museum curators, teachers and peace researchers and a review of published materials, we argue that there is a lacuna in the study on the contribution of peace museums to the interfaith dialogue debate. The…

  3. Interfaith Dialogue as a Means for Transformational Conversations

    ERIC Educational Resources Information Center

    Krebs, Stephanie Russell

    2015-01-01

    This article reports findings, inspired by the researcher's personal, transformational experience, on students' responses to an interfaith dialogue at an Interfaith Youth Core Interfaith Leadership Institute. Results demonstrated that several factors characterize interfaith dialogue: the environment, individual relationships fostered through…

  4. Interactive multi-mode blade impact analysis

    NASA Technical Reports Server (NTRS)

    Alexander, A.; Cornell, R. W.

    1978-01-01

    The theoretical methodology used in developing an analysis for the response of turbine engine fan blades subjected to soft-body (bird) impacts is reported, and the computer program developed using this methodology as its basis is described. This computer program is an outgrowth of two programs that were previously developed for the purpose of studying problems of a similar nature (a 3-mode beam impact analysis and a multi-mode beam impact analysis). The present program utilizes an improved missile model that is interactively coupled with blade motion which is more consistent with actual observations. It takes into account local deformation at the impact area, blade camber effects, and the spreading of the impacted missile mass on the blade surface. In addition, it accommodates plate-type mode shapes. The analysis capability in this computer program represents a significant improvement in the development of the methodology for evaluating potential fan blade materials and designs with regard to foreign object impact resistance.

  5. Computer modeling of human decision making

    NASA Technical Reports Server (NTRS)

    Gevarter, William B.

    1991-01-01

    Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included as well as models which include motivation. Both models which have associated computer programs, and those that do not, are considered. Since flow diagrams, that assist in constructing computer simulation of such models, were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in construction of more realistic future simulations of human decision making.

  6. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain tumor images, we developed the algorithm to segment multimodal brain tumor images from the magnetic resonance (MR) multimodal features and to obtain the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.
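
    The key practical point above, segmenting without fixing the number of clusters in advance, can be illustrated with scikit-learn's Dirichlet-process variant of a Gaussian mixture, which prunes unused components automatically. This is a toy sketch of that idea on synthetic multi-channel voxel features; the paper's anisotropic diffusion and MRF smoothing steps are omitted.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)

    # Toy multimodal voxel features: rows are voxels, columns are MR channels
    # (e.g. T1, T2, FLAIR intensities); three synthetic tissue classes.
    tissue_a = rng.normal([0.2, 0.3, 0.1], 0.05, size=(500, 3))
    tissue_b = rng.normal([0.6, 0.5, 0.7], 0.05, size=(500, 3))
    tissue_c = rng.normal([0.9, 0.1, 0.4], 0.05, size=(200, 3))
    features = np.vstack([tissue_a, tissue_b, tissue_c])

    dpgmm = BayesianGaussianMixture(
        n_components=10,                                    # upper bound, not a fixed cluster count
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
        random_state=0,
    )
    labels = dpgmm.fit_predict(features)
    print("effective clusters:", int(np.sum(dpgmm.weights_ > 1e-2)))   # expected: about 3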

  7. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

    Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning the compact binary codes to preserve semantic information given by labels. The overwhelming majority of these methods are similarity preserving approaches which approximate pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in hash learning process, which results in hash codes from different classes undistinguished, and therefore reduces the accuracy and robustness for the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates the hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. And then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval task.
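
    Once image and text features are mapped to binary codes in a shared Hamming space, as in MDBE, cross-modal retrieval reduces to a Hamming-distance ranking. The sketch below illustrates only that retrieval step; the learned, label-driven hash functions of the paper are replaced here by random sign projections purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    dim_img, dim_txt, n_bits = 512, 300, 64

    # Stand-in "hash functions": sign of a random linear projection per modality.
    W_img = rng.standard_normal((dim_img, n_bits))
    W_txt = rng.standard_normal((dim_txt, n_bits))

    def hash_codes(features, W):
        return (features @ W > 0).astype(np.uint8)          # n x n_bits binary codes

    def hamming_rank(query_code, database_codes):
        distances = np.count_nonzero(database_codes != query_code, axis=1)
        return np.argsort(distances)                        # best matches first

    if __name__ == "__main__":
        image_db = hash_codes(rng.standard_normal((1000, dim_img)), W_img)
        text_query = hash_codes(rng.standard_normal((1, dim_txt)), W_txt)
        print("top-5 retrieved image indices:", hamming_rank(text_query[0], image_db)[:5])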

  8. Multi-Modality Phantom Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue-mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  9. Multimodal sequence learning.

    PubMed

    Kemény, Ferenc; Meier, Beat

    2016-02-01

    While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Multimodality instrument for tissue characterization

    NASA Technical Reports Server (NTRS)

    Mah, Robert W. (Inventor); Andrews, Russell J. (Inventor)

    2004-01-01

    A system with multimodality instrument for tissue identification includes a computer-controlled motor driven heuristic probe with a multisensory tip. For neurosurgical applications, the instrument is mounted on a stereotactic frame for the probe to penetrate the brain in a precisely controlled fashion. The resistance of the brain tissue being penetrated is continually monitored by a miniaturized strain gauge attached to the probe tip. Other modality sensors may be mounted near the probe tip to provide real-time tissue characterizations and the ability to detect the proximity of blood vessels, thus eliminating errors normally associated with registration of pre-operative scans, tissue swelling, elastic tissue deformation, human judgement, etc., and rendering surgical procedures safer, more accurate, and efficient. A neural network program adaptively learns the information on resistance and other characteristic features of normal brain tissue during the surgery and provides near real-time modeling. A fuzzy logic interface to the neural network program incorporates expert medical knowledge in the learning process. Identification of abnormal brain tissue is determined by the detection of change and comparison with previously learned models of abnormal brain tissues. The operation of the instrument is controlled through a user friendly graphical interface. Patient data is presented in a 3D stereographics display. Acoustic feedback of selected information may optionally be provided. Upon detection of the close proximity to blood vessels or abnormal brain tissue, the computer-controlled motor immediately stops probe penetration. The use of this system will make surgical procedures safer, more accurate, and more efficient. Other applications of this system include the detection, prognosis and treatment of breast cancer, prostate cancer, spinal diseases, and use in general exploratory surgery.

  11. Russian Supplementary Dialogues.

    ERIC Educational Resources Information Center

    Peace Corps, Ashgabat (Turkmenistan).

    This manual is designed for the Russian language training of Peace Corps volunteers serving in Turkmenistan, and focuses on daily communication skills needed in that context. It consists of nine topical lessons, each containing several brief dialogues targeting specific language competencies, and exercises. Text is entirely in Russian, except for…

  12. Patient participation as dialogue: setting research agendas

    PubMed Central

    Abma, Tineke A.; Broerse, Jacqueline E. W.

    2010-01-01

    Background: Collaboration with patients in healthcare and medical research is an emerging development. We aimed to develop a methodology for health research agenda setting processes grounded in the notion of participation as dialogue. Methods: We conducted seven case studies between 2003 and 2007 to develop and validate a Dialogue Model for patient participation in health research agenda setting. The case studies related to spinal cord injury, neuromuscular diseases, renal failure, asthma/chronic obstructive pulmonary disease, burns, diabetes and intellectual disabilities. Results: The Dialogue Model is grounded in participatory and interactive approaches and has been adjusted on the basis of pilot work. It has six phases: exploration; consultation; prioritization; integration; programming; and implementation. These phases are discussed and illustrated with a case description of research agenda setting relating to burns. Conclusions: The Dialogue Model appeared relevant and feasible to structure the process of collaboration between stakeholders in several research agenda setting processes. The phase of consultation enables patients to develop their own voice and agenda, and prepares them for the broader collaboration with other stakeholder groups. Challenges include the stimulation of more permanent changes in research, and institutional transitions. PMID:20536537

  13. Democracy in Education through Community-Based Policy Dialogues

    ERIC Educational Resources Information Center

    Winton, Sue

    2010-01-01

    In 2008, People for Education, an Ontario-based parent-led organization, hosted eight policy dialogues with citizens about possibilities for the province's public schools. Policy dialogues are conversations about policy issues, ideas, processes, and outcomes where participants share their knowledge, perspectives, and experiences. In small groups…

  14. The role of voice input for human-machine communication.

    PubMed Central

    Cohen, P R; Oviatt, S L

    1995-01-01

    Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803

  15. A simultaneous multimodal imaging system for tissue functional parameters

    NASA Astrophysics Data System (ADS)

    Ren, Wenqi; Zhang, Zhiwu; Wu, Qiang; Zhang, Shiwu; Xu, Ronald

    2014-02-01

    Simultaneous and quantitative assessment of skin functional characteristics in different modalities will facilitate diagnosis and therapy in many clinical applications such as wound healing. However, many existing clinical practices and multimodal imaging systems are subjective, qualitative, sequential in multimodal data collection, and require co-registration between different modalities. To overcome these limitations, we developed a multimodal imaging system for quantitative, non-invasive, and simultaneous imaging of cutaneous tissue oxygenation and blood perfusion parameters. The imaging system integrated multispectral and laser speckle imaging technologies into one experimental setup. A Labview interface was developed for equipment control, synchronization, and image acquisition. Advanced algorithms based on wide-gap second derivative reflectometry and laser speckle contrast analysis (LASCA) were developed for accurate reconstruction of tissue oxygenation and blood perfusion, respectively. Quantitative calibration experiments and a new style of skin-simulating phantom were designed to verify the accuracy and reliability of the imaging system. The experimental results were compared with a Moor tissue oxygenation and perfusion monitor. For in vivo testing, a post-occlusion reactive hyperemia (PORH) procedure in a human subject and an ongoing wound healing monitoring experiment using dorsal skinfold chamber models were conducted to validate the usability of our system for dynamic detection of oxygenation and perfusion parameters. In this study, we have not only set up an advanced multimodal imaging system for cutaneous tissue oxygenation and perfusion parameters but also elucidated its potential for wound healing assessment in clinical practice.
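
    The laser speckle contrast analysis (LASCA) step mentioned above can be illustrated with a minimal sketch: spatial speckle contrast K = std/mean over a sliding window, plus a commonly used simplified perfusion index derived from it. The window size and the 1/K^2 mapping are assumptions for illustration, not the authors' implementation.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def speckle_contrast(raw_speckle, window=7):
          """Spatial speckle contrast K = std/mean in a sliding window (window size is illustrative)."""
          img = raw_speckle.astype(float)
          mean = uniform_filter(img, size=window)
          mean_sq = uniform_filter(img ** 2, size=window)
          var = np.clip(mean_sq - mean ** 2, 0.0, None)      # guard against negative rounding error
          return np.sqrt(var) / np.maximum(mean, 1e-9)

      def relative_perfusion(K):
          """A common simplified flow index, proportional to 1/K^2 - 1 (assumed, not the paper's exact model)."""
          return 1.0 / np.maximum(K, 1e-9) ** 2 - 1.0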

  16. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
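
    A minimal sketch of one adaptation cycle of importance sampling with a Gaussian-mixture proposal; the polynomial chaos surrogate described above is omitted, and the target log-posterior, component count, and resampling scheme are illustrative assumptions rather than the authors' algorithm.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def adapt_gm_proposal(log_posterior, proposal, n_draws=2000, n_components=3, seed=0):
          """One adaptation step: sample the current GM proposal, compute importance
          weights against the target posterior, and refit the mixture to weighted resamples."""
          rng = np.random.default_rng(seed)
          x, _ = proposal.sample(n_draws)                        # draws from the current proposal
          log_w = log_posterior(x) - proposal.score_samples(x)   # log importance weights
          w = np.exp(log_w - log_w.max())
          w /= w.sum()
          resampled = x[rng.choice(n_draws, size=n_draws, p=w)]  # weighted resampling
          new_proposal = GaussianMixture(n_components=n_components).fit(resampled)
          return new_proposal, x, w

    Iterating this step lets the proposal migrate its components toward the posterior modes; in the paper, the expensive forward-model calls inside log_posterior would be served by the PC surrogate.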

  17. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang, E-mail: guanglin@purdue.edu

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  18. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system can recognize observed gestures by using the three models. The recognition results of the three models are integrated by the proposed framework, and the combined output becomes the final result. The motion and audio models are learned using Hidden Markov Models. Random Forest, which is the video classifier, is used to learn the video model. In the experiments to test the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of the three models achieves the highest recognition rate, indicating that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
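
    A minimal sketch of late fusion across the three unimodal classifiers; the weighted log-linear rule and the equal weights are illustrative assumptions, not the integration framework used in the paper.

      import numpy as np

      def fuse_modalities(prob_motion, prob_audio, prob_video, weights=(1.0, 1.0, 1.0)):
          """Combine per-class probability vectors from the motion (HMM), audio (HMM)
          and video (Random Forest) models with a weighted product-of-experts rule."""
          stacked = np.stack([prob_motion, prob_audio, prob_video])       # shape (3, n_classes)
          log_fused = np.tensordot(weights, np.log(stacked + 1e-12), axes=1)
          fused = np.exp(log_fused - log_fused.max())
          return fused / fused.sum()                                      # normalized class posterior

      # predicted_gesture = int(np.argmax(fuse_modalities(p_motion, p_audio, p_video)))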

  19. Human-Computer Interaction and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1995-01-01

    The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.

  20. Biological characterization of preclinical Bioluminescent Osteosarcoma Orthotopic Mouse (BOOM) model: A multi-modality approach

    PubMed Central

    Garimella, Rama; Eskew, Jeff; Bhamidi, Priyanka; Vielhauer, George; Hong, Yan; Anderson, H. Clarke; Tawfik, Ossama; Rowe, Peter

    2013-01-01

    Osteosarcoma (OS) is a bone malignancy that affects children and adolescents. It is a highly aggressive tumor and typically metastasizes to lungs. Despite aggressive chemotherapy and surgical treatments, the current 5-year survival rate is 60–70%. Clinically relevant models are needed to understand OS pathobiology, metastatic progression from bones to lungs, and ultimately, to develop more efficacious treatment strategies and improve survival rates in OS patients with metastasis. The main goal of this study was to develop and characterize an in vivo OS model that allows non-invasive tracking of tumor progression in real time and aids in studying OS pathobiology and in screening potential therapeutic agents against OS. In this study, we have used a multi-modality approach combining bioluminescent imaging, electron microscopy, micro-computed tomography, and histopathology to develop and characterize a preclinical Bioluminescent Osteosarcoma Orthotopic Mouse (BOOM) model, using the 143B human OS cell line. The results of this study clearly demonstrate that the BOOM model represents the clinical disease, as evidenced by a spectrum of changes associated with tumor establishment, progression and metastasis, and by detection of known OS biomarkers in the primary and metastatic tumor tissue. Key novel findings of this study include: (a) a multimodality approach for extensive characterization of the BOOM model using the 143B human OS cell line; (b) evidence of renal metastasis in an OS orthotopic model using 143B cells; (c) evidence of Runx2 expression in the metastatic lung tissue; and (d) evidence of the presence of extracellular membrane vesicles and myofibroblasts in the BOOM model. PMID:25688332

  1. Vestibular system: the many facets of a multimodal sense.

    PubMed

    Angelaki, Dora E; Cullen, Kathleen E

    2008-01-01

    Elegant sensory structures in the inner ear have evolved to measure head motion. These vestibular receptors consist of highly conserved semicircular canals and otolith organs. Unlike other senses, vestibular information in the central nervous system becomes immediately multisensory and multimodal. There is no overt, readily recognizable conscious sensation from these organs, yet vestibular signals contribute to a surprising range of brain functions, from the most automatic reflexes to spatial perception and motor coordination. Critical to these diverse, multimodal functions are multiple computationally intriguing levels of processing. For example, the need for multisensory integration necessitates vestibular representations in multiple reference frames. Proprioceptive-vestibular interactions, coupled with corollary discharge of a motor plan, allow the brain to distinguish actively generated from passive head movements. Finally, nonlinear interactions between otolith and canal signals allow the vestibular system to function as an inertial sensor and contribute critically to both navigation and spatial orientation.

  2. The integration of emotional and symbolic components in multimodal communication

    PubMed Central

    Mehu, Marc

    2015-01-01

    Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g., verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g., non-verbal signals that are difficult to manipulate voluntarily) likely serve a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by non-verbal signals. In this view, the effect of emotional processes on communication serves to change the quality of social signals, making them more efficient at producing responses in perceivers, whereas symbolic components increase the signals' efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components is discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals. PMID:26217280

  3. Reflections on the researcher-participant relationship and the ethics of dialogue.

    PubMed

    Yassour-Borochowitz, Dalit

    2004-01-01

    Research concerned with human beings is always an interference of some kind, thus posing ethical dilemmas that require justification of procedures and methodologies. This is especially true in social work, which mostly faces sensitive populations and sensitive issues. In the process of conducting research on the emotional life histories of Israeli men who batter their partners, serious ethical questions arose, such as (a) Did the participants really give their consent? (b) What are the limits of the researcher-participant relationship, and who decides them? (c) For whom is the study beneficial? and (d) To what degree did the methodology fit the participants? In this article, I discuss the Socratic idea of truth revealed through dialogue and the idea of reciprocity developed in Buber's (1949) ethics of dialogue and Habermas' (1990) communicative ethics. The three essential conclusions drawn from the ethical questions raised and from the discussion of these thinkers are (a) dialogical methodology is ethically justified; (b) dynamic interactions give a more holistic perspective of human nature, thus enriching the field; and (c) through dialogical methodology both researcher and participant profit from growth of knowledge, which is a key to empowerment and change.

  4. Generation of a suite of 3D computer-generated breast phantoms from a limited set of human subject data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Christina M. L.; Palmeri, Mark L.; Department of Anesthesiology, Duke University Medical Center, Durham, North Carolina 27710

    2013-04-15

    Purpose: The authors previously reported on a three-dimensional computer-generated breast phantom, based on empirical human image data, including a realistic finite-element based compression model that was capable of simulating multimodality imaging data. The computerized breast phantoms are a hybrid of two phantom generation techniques, combining empirical breast CT (bCT) data with flexible computer graphics techniques. However, to date, these phantoms have been based on single human subjects. In this paper, the authors report on a new method to generate multiple phantoms, simulating additional subjects from the limited set of original dedicated breast CT data. The authors developed an image morphing technique to construct new phantoms by gradually transitioning between two human subject datasets, with the potential to generate hundreds of additional pseudoindependent phantoms from the limited bCT cases. The authors conducted a preliminary subjective assessment with a limited number of observers (n = 4) to illustrate how realistic the simulated images generated with the pseudoindependent phantoms appeared. Methods: Several mesh-based geometric transformations were developed to generate distorted breast datasets from the original human subject data. Segmented bCT data from two different human subjects were used as the 'base' and 'target' for morphing. Several combinations of transformations were applied to morph between the 'base' and 'target' datasets such as changing the breast shape, rotating the glandular data, and changing the distribution of the glandular tissue. Following the morphing, regions of skin and fat were assigned to the morphed dataset in order to appropriately assign mechanical properties during the compression simulation. The resulting morphed breast was compressed using a finite element algorithm and simulated mammograms were generated using techniques described previously. Sixty-two simulated mammograms, generated from morphing three
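
    A minimal sketch of the gradual base-to-target transition idea, reduced to voxelwise blending between co-registered segmented volumes; the paper's mesh-based geometric transformations and finite-element compression are not reproduced here, and the variable names are placeholders.

      import numpy as np

      def morph_volumes(base, target, alpha):
          """Blend two co-registered segmented volumes: alpha = 0 gives the 'base' subject,
          alpha = 1 the 'target', and intermediate values give in-between anatomies.
          Assumes voxel-level correspondence between the two datasets."""
          assert base.shape == target.shape
          return (1.0 - alpha) * base.astype(float) + alpha * target.astype(float)

      # e.g. a family of pseudoindependent phantoms from one subject pair:
      # phantoms = [morph_volumes(base_bct, target_bct, a) for a in np.linspace(0.2, 0.8, 5)]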

  5. Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness.

    PubMed

    Calhoun, Vince D; Sui, Jing

    2016-05-01

    It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness.

  6. Human performance models for computer-aided engineering

    NASA Technical Reports Server (NTRS)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  7. Parallel approach to incorporating face image information into dialogue processing

    NASA Astrophysics Data System (ADS)

    Ren, Fuji

    2000-10-01

    There are many kinds of so-called irregular expressions in natural dialogues. Even when the words of a conversation are the same, different meanings can be conveyed by the speaker's feelings or facial expression. To understand dialogues well, a flexible dialogue processing system must infer the speaker's view properly. However, it is difficult to obtain the meaning of the speaker's sentences in various scenes using traditional methods. In this paper, a new approach for dialogue processing that incorporates information from the speaker's face is presented. We first divide conversation statements into several simple tasks. Second, we process each simple task using an independent processor. Third, we employ information from the speaker's face to estimate the speaker's view and resolve ambiguities in the dialogue. The approach presented in this paper can work efficiently because the independent processors run in parallel, writing partial results to a shared memory, incorporating partial results at appropriate points, and complementing each other. A parallel algorithm and a method for employing face information in dialogue machine translation are discussed, and some results are included in this paper.

  8. Dialogue as Moral Paradigm: Paths toward Intercultural Transformation

    ERIC Educational Resources Information Center

    Keller, J. Gregory

    2011-01-01

    The Council of Europe's 2008 "White Paper on Intercultural Dialogue: 'living together as equals in dignity'" points to the need for shared values upon which intercultural dialogue might rest. In order, however, to overcome the monologic separateness that threatens community, we must educate ourselves to recognize the dialogism of our…

  9. Including Psychology in Inclusive Pedagogy: Enriching the Dialogue?

    ERIC Educational Resources Information Center

    Kershner, Ruth

    2016-01-01

    Inclusive education is a complex field of study and practice that requires good communication and dialogue between all involved. Psychology has to some extent been marginalised in these educational dialogues. This is, in part, due to psychology's perceived heritage in the standardised testing that has been used to support the educational…

  10. DISCUSS: Toward a Domain Independent Representation of Dialogue

    ERIC Educational Resources Information Center

    Becker, Lee

    2012-01-01

    While many studies have demonstrated that conversational tutoring systems have a positive effect on learning, the amount of manual effort required to author, design, and tune dialogue behaviors remains a major barrier to widespread deployment and adoption of these systems. Such dialogue systems must not only understand student speech, but must…

  11. Modeling Human-Computer Decision Making with Covariance Structure Analysis.

    ERIC Educational Resources Information Center

    Coovert, Michael D.; And Others

    Arguing that sufficient theory exists about the interplay between human information processing, computer systems, and the demands of various tasks to construct useful theories of human-computer interaction, this study presents a structural model of human-computer interaction and reports the results of various statistical analyses of this model.…

  12. A Multimodal Emotion Detection System during Human-Robot Interaction

    PubMed Central

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two channels are used to detect emotions: voice analysis and facial expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so that it can adapt its strategy in order to achieve a greater degree of satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598
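
    A minimal sketch of one possible decision rule combining the two channels; the additive weighting and the confidence threshold are illustrative assumptions, not the rule tuned in the paper's experiments.

      def fuse_emotion(voice_scores, face_scores, threshold=0.5):
          """voice_scores / face_scores: dicts mapping emotion label -> confidence in [0, 1],
          e.g. outputs of a GEVA-like audio component and a GEFA-like visual component."""
          labels = set(voice_scores) | set(face_scores)
          combined = {e: voice_scores.get(e, 0.0) + face_scores.get(e, 0.0) for e in labels}
          best = max(combined, key=combined.get)
          # fall back to 'neutral' when neither channel is confident about any emotion
          return best if combined[best] >= threshold else "neutral"

      # fuse_emotion({"joy": 0.7, "anger": 0.1}, {"joy": 0.4, "surprise": 0.3})  ->  "joy"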

  13. Robust Nonrigid Multimodal Image Registration using Local Frequency Maps*

    PubMed Central

    Jian, Bing; Vemuri, Baba C.; Marroquin, José L.

    2008-01-01

    Automatic multi-modal image registration is central to numerous tasks in medical imaging today and has a vast range of applications, e.g., image guidance, atlas construction, etc. In this paper, we present a novel multi-modal 3D non-rigid registration algorithm wherein the 3D images to be registered are represented by their corresponding local frequency maps, efficiently computed using the Riesz transform as opposed to the popularly used Gabor filters. The non-rigid registration between these local frequency maps is formulated in a statistically robust framework involving the minimization of the integral squared error, a.k.a. L2E (L2 error). This error is expressed as the squared difference between the true density of the residual (which is the squared difference between the non-rigidly transformed reference and the target local frequency representations) and a Gaussian or mixture of Gaussians density approximation of the same. The non-rigid transformation is expressed in a B-spline basis to achieve the desired smoothness in the transformation as well as computational efficiency. The key contributions of this work are (i) the use of the Riesz transform to achieve better efficiency in computing the local frequency representation in comparison to Gabor filter-based approaches, (ii) a new mathematical model for local-frequency based non-rigid registration, and (iii) analytic computation of the gradient of the robust non-rigid registration cost function to achieve efficient and accurate registration. The proposed non-rigid L2E-based registration is a significant extension of research reported in the literature to date. We present experimental results for registering several real data sets with synthetic and real non-rigid misalignments. PMID:17354721
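
    A hedged sketch of the standard L2E (integrated squared error) criterion that a cost of this kind typically minimizes, written here for residuals between local-frequency maps; the exact residual definition and density model used in the paper may differ.

      \[
        \hat{\mu},\hat{\theta}
        \;=\; \arg\min_{\mu,\theta}
        \left[ \int f_{\theta}(r)^{2}\,dr \;-\; \frac{2}{n}\sum_{i=1}^{n} f_{\theta}(r_i) \right],
        \qquad
        r_i = F_{\mathrm{tgt}}(x_i) - F_{\mathrm{ref}}\!\bigl(T_{\mu}(x_i)\bigr),
      \]

      where $T_{\mu}$ is the B-spline transformation, $F_{\mathrm{ref}}$ and $F_{\mathrm{tgt}}$ are the local frequency maps, and $f_{\theta}$ is the model density of the residual; for a zero-mean Gaussian $f_{\theta} = \mathcal{N}(0,\sigma^{2})$ the first term evaluates to $1/(2\sigma\sqrt{\pi})$.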

  14. Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology.

    PubMed

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery.

  15. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  16. Promoting Children's Healthy Social-Emotional Growth: Dialogue Journal

    ERIC Educational Resources Information Center

    Konishi, Chiaki; Park, Sol

    2017-01-01

    Dialogue journals are a form of writing in which a student and a teacher carry on a conversation over time. This paper addresses the benefits of using dialogue journals for promoting a positive social-emotional learning (SEL) environment for children in school settings. Educators and researchers have increasingly acknowledged the importance of SEL…

  17. Dialogue on Modernity and Modern Education in Dispute

    ERIC Educational Resources Information Center

    Baker, Michael; Peters, Michael A.

    2012-01-01

    This is a dialogue or conversation between Michael Baker (MB) and Michael A. Peters (MP) on the concept of modernity and its significance for educational theory. The dialogue took place originally as a conversation about a symposium on modernity held at the American Educational Studies Association meeting 2010. It was later developed for…

  18. Multimodal Discourse Analysis of the Movie "Argo"

    ERIC Educational Resources Information Center

    Bo, Xu

    2018-01-01

    Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…

  19. Dialogue on safety

    Treesearch

    Anne Black; James Saveland; Dave Thomas

    2011-01-01

    There are many reasons to hold a conversation, among them: information download, information exchange, selection of a course of action, consensus-building, and exploration. Dialogue is a particular type of conversation that seeks to explore a subject in order to generate new ideas and insights. It is based on the recognitions that (1) the critical issues of today are...

  20. Improved harmonisation from policy dialogue? Realist perspectives from Guinea and Chad.

    PubMed

    Kwamie, Aku; Nabyonga-Orem, Juliet

    2016-07-18

    Harmonisation is a key principle of the Paris Declaration. The Universal Health Coverage (UHC) Partnership, an initiative of the European Union, the Government of Luxembourg and the World Health Organization, supported health policy dialogues between 2012 and 2015 in identified countries in the WHO African Region. The UHC Partnership has amongst its key objectives to strengthen national health policy development. In Guinea and Chad, policy dialogue focused on elaborating the national health plan and other key documents. This study is an analytical reflection inspired by realist evaluative approaches to understand whether policy dialogue led to improved harmonisation amongst health actors in Guinea and Chad, and if so, how and why. Interviews were conducted in Guinea and Chad with key informants at the national and sub-national government levels, civil society, and development partners. A review of relevant policy documents and reports was added to data collection to construct a full picture of the policy dialogue process. Context-mechanism-outcome configurations were used as the realist framework to guide the analysis on how participants' understanding of what policy dialogue was and the way the policy dialogue process unfolded led to improved harmonisation. Improved harmonisation as a result of policy dialogue was perceived to be stronger in Guinea than in Chad. While in both countries the participants held a shared view of what policy dialogue was and what it could achieve, and both policy dialogue processes were considered to be well implemented (i.e., well-facilitated, evidence-based, participatory, and consisted of recurring meetings and activities), certain contextual factors in Chad tempered the view of harmonisation as having improved. These were the pre-existence of dialogic policy processes that had exposed the actors to the potential that policy dialogue could have; a focus on elaborating provincial level strategies, which gave the sense that the process

  1. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g., visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from the different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to those of other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
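
    A minimal sketch of the per-modality dense optical flow step that precedes the alignment described above, using OpenCV's Farneback estimator with placeholder parameters; the variational mapping between the RGB and IR flow fields is the paper's contribution and is not reproduced here.

      import cv2

      def dense_flow(prev_gray, next_gray):
          """Dense optical flow for one modality; inputs are consecutive single-channel
          (grayscale RGB or IR) frames as uint8 arrays. Returns an (H, W, 2) flow field."""
          # positional args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
          return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)

      # flow_rgb = dense_flow(rgb_t0, rgb_t1)
      # flow_ir  = dense_flow(ir_t0, ir_t1)
      # aligning flow_rgb and flow_ir (the variational step) then yields cross-modal correspondences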

  2. Designing Multimodal Texts about the Middle Ages

    ERIC Educational Resources Information Center

    Insulander, Eva; Lindstrand, Fredrik; Selander, Staffan

    2017-01-01

    Multimedial and multimodal communication arouse interest in many fields of research today. By contrast, little attention is paid to multimodality in relation to designs for learning, especially in relation to representations of knowledge on an aggregated level. By analyzing three multimodal texts about the Middle Ages, including a textbook, a film…

  3. Pesticide Program Dialogue Committee (PPDC)

    EPA Pesticide Factsheets

    The Pesticide Program Dialogue Committee, a permanent, broadly representative advisory committee, meets with EPA on a regular basis to discuss pesticide regulatory, policy, and program implementation issues.

  4. Reflective scientific sense-making dialogue in two languages: The science in the dialogue and the dialogue in the science

    NASA Astrophysics Data System (ADS)

    Ash, Doris

    2004-11-01

    In this paper I focus on the transition from everyday to scientific ways of reasoning, and on the intertwined roles of meaning-making dialogue and science content as they contribute to scientific literacy. I refer to views of science, and how scientific understanding is advanced dialogically, by Hurd (Science Education, 1998, 82, 402-416), Brown (The Journal of Learning Sciences, 1992, 2(2), 141-178), Bruner (Acts of Meaning, Cambridge, MA: Harvard University Press, 1990), Roth (In J. Brophy (Ed.), Social Constructivist Teaching: Affordances and Constraints (Advances in Research on Teaching Series, Vol. 9), New York: Elsevier/JAI, 2003), and Wells (Dialogic Inquiry: Towards a Sociocultural Practice and Theory of Education, New York: Cambridge University Press, 1999). I argue that family collaborative dialogues in nonschool settings can be the foundations for scientific ways of thinking. I focus on the particular reflective family dialogues at the Monterey Bay Aquarium, when family members remembered and synthesized essential biological themes, centering on adaptation, from one visit to the next, in both Spanish and English. My approach is informed by sociocultural theory, with emphasis on the negotiations of meaning in the zone of proximal development (Vygotsky, 1978), as learners engage in joint productive activity (Tharp & Gallimore, Rousing Minds to Life: Teaching, Learning and Schooling in Social Context, New York: Cambridge University Press, 1988). Over the past decades, researchers have discovered that observing social activity, conversation, and meaning-making in informal settings (Crowley & Callanan, 1997; Guberman, 2002; Rogoff, 2001; Vasquez, Pease-Alvarez, & Shannon, Pushing Boundaries: Language and Culture in a Mexicano Community, New York: Cambridge University Press, 1994) has much to teach us regarding learning in general. To date there has been little research with Spanish-speaking families in informal learning settings and virtually none that

  5. Adhesion of multimode adhesives to enamel and dentin after one year of water storage.

    PubMed

    Vermelho, Paulo Moreira; Reis, André Figueiredo; Ambrosano, Glaucia Maria Bovi; Giannini, Marcelo

    2017-06-01

    This study aimed to evaluate the ultramorphological characteristics of tooth-resin interfaces and the bond strength (BS) of multimode adhesive systems to enamel and dentin. Multimode adhesives (Scotchbond Universal (SBU) and All-Bond Universal) were tested in both self-etch and etch-and-rinse modes and compared to control groups (Optibond FL and Clearfil SE Bond (CSB)). Adhesives were applied to human molars and composite blocks were incrementally built up. Teeth were sectioned to obtain specimens for microtensile BS testing and TEM analysis. Specimens were tested after storage for either 24 h or 1 year. SEM analyses were performed to classify the failure pattern of beam specimens after BS testing. Etching increased the enamel BS of multimode adhesives; however, BS decreased after storage for 1 year. No significant differences in dentin BS were noted between the multimode and control adhesives in either evaluation period. Storage for 1 year reduced the dentin BS only for SBU in self-etch mode. TEM analysis identified hybridization and interaction zones in dentin and enamel for all adhesives. Silver impregnation was detected on dentin-resin interfaces after storage of specimens for 1 year only with SBU and CSB. Storage for 1 year reduced enamel BS when adhesives were applied on an etched surface; however, the BS of multimode adhesives did not differ from that of the control group. In dentin, no significant difference was noted between the multimode and control group adhesives, regardless of etching mode. In general, multimode adhesives showed behavior similar to traditional adhesive techniques. Multimode adhesives are one-step self-etching adhesives that can also be used after enamel/dentin phosphoric acid etching, but each product may work better under specific conditions.

  6. Reflective Scientific Sense-Making Dialogue in Two Languages: The Science in the Dialogue and the Dialogue in the Science

    ERIC Educational Resources Information Center

    Ash, Doris

    2004-01-01

    In this paper I focus on the transition from everyday to scientific ways of reasoning, and on the intertwined roles of meaning-making dialogue and science content as they contribute to scientific literacy. I refer to views of science, and how scientific understanding is advanced dialogically, by Hurd (Science Education, 1998, 82, 402-416), Brown…

  7. Multimode fiber devices with single-mode performance

    NASA Astrophysics Data System (ADS)

    Leon-Saval, S. G.; Birks, T. A.; Bland-Hawthorn, J.; Englund, M.

    2005-10-01

    A taper transition can couple light between a multimode fiber and several single-mode fibers. If the number of single-mode fibers matches the number of spatial modes in the multimode fiber, the transition can have low loss in both directions. This enables the high performance of single-mode fiber devices to be attained in multimode fibers. We report an experimental proof of concept by using photonic crystal fiber techniques to make the transitions, demonstrating a multimode fiber filter with the transmission spectrum of a single-mode fiber grating.

  8. Generative Dialogue as a Transformative Learning Practice in Adult and Higher Education Settings

    ERIC Educational Resources Information Center

    Gunnlaugson, Olen

    2006-01-01

    This article explores Scharmer's account of generative dialogue, which followed from Bohmian dialogue in the 1980s and Isaacs' research with the MIT Dialogue Project in the early 1990s. It presents the author's view that generative dialogue offers a useful theoretical framework and effective means for facilitating transformative learning processes…

  9. Single-cell multimodal profiling reveals cellular epigenetic heterogeneity.

    PubMed

    Cheow, Lih Feng; Courtois, Elise T; Tan, Yuliana; Viswanathan, Ramya; Xing, Qiaorui; Tan, Rui Zhen; Tan, Daniel S W; Robson, Paul; Loh, Yuin-Han; Quake, Stephen R; Burkholder, William F

    2016-10-01

    Sample heterogeneity often masks DNA methylation signatures in subpopulations of cells. Here, we present a method to genotype single cells while simultaneously interrogating gene expression and DNA methylation at multiple loci. We used this targeted multimodal approach, implemented on an automated, high-throughput microfluidic platform, to assess primary lung adenocarcinomas and human fibroblasts undergoing reprogramming by profiling epigenetic variation among cell types identified through genotyping and transcriptional analysis.

  10. Multimodal Diffuse Optical Imaging

    NASA Astrophysics Data System (ADS)

    Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.

    Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.

  11. Microscopy with multimode fibers

    NASA Astrophysics Data System (ADS)

    Moser, Christophe; Papadopoulos, Ioannis; Farahi, Salma; Psaltis, Demetri

    2013-04-01

    Microscopes are usually thought of as comprising imaging elements such as objectives and eye-piece lenses. A different type of microscope, used for endoscopy, consists of waveguiding elements such as fiber bundles, where each fiber in the bundle transports the light corresponding to one pixel in the image. Recently a new type of microscope has emerged that exploits the large number of propagating modes in a single multimode fiber. We have successfully produced fluorescence images of neural cells with sub-micrometer resolution via a 200-micrometer-core multimode fiber. The method for achieving imaging consists of using digital phase conjugation to reproduce a focal spot at the tip of the multimode fiber. The image is formed by scanning the focal spot digitally and collecting the fluorescence point by point.

  12. Multimodal imaging of ischemic wounds

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2012-12-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method is available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between the different imaging modalities. The multimodal wound imaging system was validated in an ongoing clinical trial approved by the OSU IRB, in which a wound 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrated the clinical usability of multimodal wound imaging.

  13. Exploring Creative Thinking in Graphically Mediated Synchronous Dialogues

    ERIC Educational Resources Information Center

    Wegerif, Rupert; McLaren, Bruce M.; Chamrada, Marian; Scheuer, Oliver; Mansour, Nasser; Miksatko, Jan; Williams, Mriga

    2010-01-01

    This paper reports on an aspect of the EC funded Argunaut project which researched and developed awareness tools for moderators of online dialogues. In this study we report on an investigation into the nature of creative thinking in online dialogues and whether or not this creative thinking can be coded for and recognized automatically such that…

  14. Ethics Responsibility Dialogue the Meaning of Dialogue in Lévinas's Philosophy

    ERIC Educational Resources Information Center

    Ben-Pazi, Hanoch

    2016-01-01

    This article examines the concept of dialogue in the philosophy of Emmanuel Lévinas, with a focus on the context of education. Its aim is to create a conversation between the Lévinasian theory and the theories of other philosophers, especially Martin Buber, in an effort to highlight the ethical significance that Lévinas assigns to the act of…

  15. Tutorial dialogues and gist explanations of genetic breast cancer risk.

    PubMed

    Widmer, Colin L; Wolfe, Christopher R; Reyna, Valerie F; Cedillos-Whynott, Elizabeth M; Brust-Renck, Priscila G; Weil, Audrey M

    2015-09-01

    The intelligent tutoring system (ITS) BRCA Gist is a Web-based tutor developed using the Shareable Knowledge Objects (SKO) platform that uses latent semantic analysis to engage women in natural-language dialogues to teach about breast cancer risk. BRCA Gist appears to be the first ITS designed to assist patients' health decision making. Two studies provide fine-grained analyses of the verbal interactions between BRCA Gist and women responding to five questions pertaining to breast cancer and genetic risk. We examined how "gist explanations" generated by participants during natural-language dialogues related to outcomes. Using reliable rubrics, scripts of the participants' verbal interactions with BRCA Gist were rated for content and for the appropriateness of the tutor's responses. Human researchers' scores for the content covered by the participants were strongly correlated with the coverage scores generated by BRCA Gist, indicating that BRCA Gist accurately assesses the extent to which people respond appropriately. In Study 1, participants' performance during the dialogues was consistently associated with learning outcomes about breast cancer risk. Study 2 was a field study with a more diverse population. Participants with an undergraduate degree or less education who were randomly assigned to BRCA Gist scored higher on tests of knowledge than those assigned to the National Cancer Institute website or than a control group. We replicated findings that the more expected content that participants included in their gist explanations, the better they performed on outcome measures. As fuzzy-trace theory suggests, encouraging people to develop and elaborate upon gist explanations appears to improve learning, comprehension, and decision making.
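
    A minimal sketch of the kind of latent-semantic-analysis scoring such a tutor could use to judge how well a free-text answer covers expected "gist" content; the corpus, dimensionality, and scoring rubric are illustrative assumptions, not the SKO platform's implementation.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      def lsa_coverage(student_answer, expected_points, corpus, n_components=50):
          """Return one cosine-similarity score per expected content point, computed in an
          LSA space built from a domain corpus plus the expected points and the answer."""
          vec = TfidfVectorizer(stop_words="english")
          X = vec.fit_transform(corpus + expected_points + [student_answer])
          k = max(1, min(n_components, X.shape[0] - 1, X.shape[1] - 1))
          lsa = TruncatedSVD(n_components=k).fit(X)
          point_vecs = lsa.transform(vec.transform(expected_points))
          answer_vec = lsa.transform(vec.transform([student_answer]))
          return cosine_similarity(answer_vec, point_vecs)[0]

    Thresholding these per-point scores would approximate the coverage measure that the studies above compare against human raters' rubric scores.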

  16. Multimodal fusion of brain imaging data: A key to finding the missing link(s) in complex mental illness

    PubMed Central

    Calhoun, Vince D; Sui, Jing

    2016-01-01

    It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565

  17. Multimodal Hip Hop Productions as Media Literacies

    ERIC Educational Resources Information Center

    Turner, K. C. Nat

    2012-01-01

    This study draws on ethnographic data from a year-long multimodal media production (MMP) course and the experience of an African American female adolescent who used the production of multimodal Hip Hop texts to express her creativity and growing socially conscious view of the world. The study demonstrates how students made meaning multimodally and…

  18. Human/computer control of undersea teleoperators

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Verplank, W. L.; Brooks, T. L.

    1978-01-01

    The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.

  19. Russian Basic Course: Dialogue Cartoon Guides, Lessons 1-83.

    ERIC Educational Resources Information Center

    Defense Language Inst., Washington, DC.

    This booklet of cartoon guides contains 83 units of instructional materials prepared by the Defense Language Institute for use in an intensive, conversational Russian course. Included are cartoon guides to dialogues and dialogue recombinations which focus on social concerns and military matters. (RL)

  20. Multimodal interactions in typically and atypically developing children: natural versus artificial environments.

    PubMed

    Giannopulu, Irini

    2013-11-01

    This review addresses the central role played by multimodal interactions in neurocognitive development. We first analyzed our studies of multimodal verbal and nonverbal cognition and emotional interactions within neuronal, that is, natural environments in typically developing children. We then tried to relate them to the topic of creating artificial environments using mobile toy robots to neurorehabilitate severely autistic children. By doing so, both neural/natural and artificial environments are considered as the basis of neuronal organization and reorganization. The common thread underlying the thinking behind this approach revolves around the brain's intrinsic properties: neuroplasticity and the fact that the brain is neurodynamic. In our approach, neural organization and reorganization using natural or artificial environments aspires to bring computational perspectives into cognitive developmental neuroscience.

  1. A psychiatric dialogue on the mind-body problem.

    PubMed

    Kendler, K S

    2001-07-01

    Of all the human professions, psychiatry is most centrally concerned with the relationship of mind and brain. In many clinical interactions, psychiatrists need to consider both subjective mental experiences and objective aspects of brain function. This article attempts to summarize, in the form of a dialogue between a philosophically informed attending psychiatrist and three residents, the major philosophical positions on the mind-body problem. The positions reviewed include the following: substance dualism, property dualism, type identity, token identity, functionalism, eliminative materialism, and explanatory dualism. This essay seeks to provide a brief user-friendly introduction, from a psychiatric perspective, to current thinking about the mind-body problem.

  2. Mothers' multimodal information processing is modulated by multimodal interactions with their infants.

    PubMed

    Tanaka, Yukari; Fukushima, Hirokata; Okanoya, Kazuo; Myowa-Yamakoshi, Masako

    2014-10-17

    Social learning in infancy is known to be facilitated by multimodal (e.g., visual, tactile, and verbal) cues provided by caregivers. In parallel with infants' development, recent research has revealed that maternal neural activity is altered through interaction with infants, for instance, to be sensitive to infant-directed speech (IDS). The present study investigated the effect of mother-infant multimodal interaction on maternal neural activity. Event-related potentials (ERPs) of mothers were compared with those of non-mothers during perception of tactile-related words primed by tactile cues. Only mothers showed ERP modulation when tactile cues were incongruent with the subsequent words, and only when the words were delivered with IDS prosody. Furthermore, the frequency of mothers' use of those words was correlated with the magnitude of ERP differentiation between congruent and incongruent stimuli presentations. These results suggest that mother-infant daily interactions enhance multimodal integration of the maternal brain in parenting contexts.

  3. Family-initiated dialogue about medications during family-centered rounds.

    PubMed

    Benjamin, Jessica M; Cox, Elizabeth D; Trapskin, Philip J; Rajamanickam, Victoria P; Jorgenson, Roderick C; Weber, Holly L; Pearson, Rachel E; Carayon, Pascale; Lubcke, Nikki L

    2015-01-01

    Experts suggest family engagement in care can improve safety for hospitalized children. Family-centered rounds (FCRs) can offer families the opportunity to participate in error recovery related to children's medications. The objective of this study was to describe family-initiated dialogue about medications and health care team responses to this dialogue during FCRs to understand the potential of FCRs to foster safe medication use. FCRs were video-recorded daily for 150 hospitalized children. Coders sorted family-initiated medication dialogue into mutually exclusive categories, reflecting place of administration, therapeutic class, topic, and health care team responses. Health care team responses were coded to reflect intent, actions taken by the team, and appropriateness of any changes. Eighty-three (55%) of the 150 families raised 318 medication topics during 347 FCRs. Most family-initiated dialogue focused on inpatient medications (65%), with home medications comprising 35%. Anti-infectives (31%), analgesics (14%), and corticosteroids (11%) were the most commonly discussed medications. The most common medication topics raised by families were scheduling (24%) and adverse drug reactions (11%). Although most health care team responses were provision of information (74%), appropriate changes to the child's medications occurred in response to 8% of family-initiated dialogue, with most changes preventing or addressing adverse drug reactions or scheduling issues. Most families initiated dialogue regarding medications during FCRs, including both inpatient and home medications. They raised topics that altered treatment and were important for medication safety, adherence, and satisfaction. Study findings suggest specific medication topics that health care teams can anticipate addressing during FCRs. Copyright © 2015 by the American Academy of Pediatrics.

  4. Unraveling Students' Interaction around a Tangible Interface Using Multimodal Learning Analytics

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Blikstein, Paulo

    2015-01-01

    In this paper, we describe multimodal learning analytics (MMLA) techniques to analyze data collected around an interactive learning environment. In a previous study (Schneider & Blikstein, submitted), we designed and evaluated a Tangible User Interface (TUI) where dyads of students were asked to learn about the human hearing system by…

  5. Learning through Ethnographic Dialogues

    ERIC Educational Resources Information Center

    Landis, David; Kalieva, Rysaldy; Abitova, Sanim; Izmukhanbetova, Sophia; Musaeva, Zhanbota

    2006-01-01

    This article describes ways that conversations constituted ethnographic research for students and teachers in Kazakhstan. Through dialogues with local community members, students worked as researchers to develop knowledge about cultural patterns and social life. Ethnographic research and writing provided valuable language and research experiences…

  6. Multimodality imaging of adult gastric emergencies: A pictorial review

    PubMed Central

    Sunnapwar, Abhijit; Ojili, Vijayanadh; Katre, Rashmi; Shah, Hardik; Nagar, Arpit

    2017-01-01

    Acute gastric emergencies require urgent surgical or nonsurgical intervention because they are associated with high morbidity and mortality. Imaging plays an important role in diagnosis since the clinical symptoms are often nonspecific and the radiologist may be the first to suggest a diagnosis, as the imaging findings are often characteristic. The purpose of this article is to provide a comprehensive review of multimodality imaging (plain radiograph, fluoroscopy, and computed tomography) of various life-threatening gastric emergencies. PMID:28515579

  7. Promoting the experimental dialogue between working memory and chunking: Behavioral data and simulation.

    PubMed

    Portrat, Sophie; Guida, Alessandro; Phénix, Thierry; Lemaire, Benoît

    2016-04-01

    Working memory (WM) is a cognitive system allowing short-term maintenance and processing of information. Maintaining information in WM classically consists in rehearsing or refreshing it. Chunking could also be considered a maintenance mechanism. However, in the literature, it is more often used to explain performance than explicitly investigated within WM paradigms. Hence, the aim of the present paper was (1) to strengthen the experimental dialogue between WM and chunking by studying the effect of acronyms in a computer-paced complex span task paradigm, and (2) to formalize this dialogue explicitly within a computational model. Young adults performed a WM complex span task in which they had to maintain series of 7 letters for further recall while performing a concurrent location judgment task. The series to be remembered were either random strings of letters or strings containing a 3-letter acronym that appeared in position 1, 3, or 5 in the series. Together, the data and simulations provide a better understanding of the maintenance mechanisms taking place in WM and its interplay with long-term memory. Indeed, the behavioral WM performance lends evidence to the functional characteristics of chunking, which seems to be, especially in a WM complex span task, an attentional time-based mechanism that certainly enhances WM performance but also competes with other processes at work in WM. Computational simulations support and delineate such a conception by showing that searching for a chunk in long-term memory involves attentionally demanding subprocesses that essentially take place during the encoding phases of the task.

  8. The role of power in health policy dialogues: lessons from African countries.

    PubMed

    Mwisongo, Aziza; Nabyonga-Orem, Juliet; Yao, Theodore; Dovlo, Delanyo

    2016-07-18

    Policy-making is a dynamic process involving the interplay of various factors, and power and its role are among its core components. Though power plays a profound role in policy-making, empirical evidence suggests that health policy analysis has paid only limited attention to the role of power, particularly in policy dialogues. This exploratory study, which used qualitative methods, had the main aim of learning about and understanding policy dialogues in five African countries and how power influences such processes. Data were collected using key informant interviews. An interview guide was developed with standardised questions and probes on the policy dialogues in each country. This paper utilises these data, plus a document review, to understand how power was manifested during the policy dialogues. Reference is made to the Arts and Tatenhove conceptual framework on power dimensions to understand how power featured during the policy dialogues in African health contexts. Arts and Tatenhove conceptualise power in policy-making in relational, dispositional and structural layers. Our study found that power was applied positively during the dialogues to prioritise agendas, fast-track processes, reorganise positions, focus attention on certain items and foster involvement of the community. Power was applied negatively during the dialogues, for example when position was used to control and shape dialogues, which limited innovation, and when knowledge power was used to influence decisions and the direction of the dialogues. Transitive power was used to challenge the government to think of implementation issues often forgotten during policy-making processes. Dispositional power was the most complex form of power, expressed both overtly and covertly. Structural power was manifested socially, culturally, politically, legally and economically. This study shows that we need to be cognisant of the role of power during policy dialogues and put mechanisms in place to manage its

  9. Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin

    NASA Astrophysics Data System (ADS)

    Lai, Zhenhua

    The author's work is divided into three aspects: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, which are introduced in turn. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which combines the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy to build, with all components off-the-shelf. The control systems have not only significantly decreased the complexity and size of the microscopes, but also increased the pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) near-infrared (NIR) laser, has potential as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of melanin detection using SMPAF for medical purposes. Selective melanin ablation with micrometer resolution has been presented using the Target system

  10. Multimodal therapy of word retrieval disorder due to phonological encoding dysfunction.

    PubMed

    Weill-Chounlamountry, Agnès; Capelle, Nathalie; Tessier, Catherine; Pradat-Diehl, Pascale

    2013-01-01

    To determine whether phonological multimodal therapy can improve naming and communication in a patient showing a lexical phonological naming disorder. This study employed oral and written learning tasks, using an error reduction procedure. A single-case-design, computer-assisted treatment was used with a 52-year-old woman with fluent aphasia following a cerebral infarction. The cognitive analysis of her word retrieval disorder revealed a phonological encoding dysfunction. Thus, a phonological procedure was designed addressing the output phonological lexicon using computer analysis of spoken and written words. The effects were tested for trained words, generalization to untrained words, maintenance and specificity. Transfer of improvement to daily life was also assessed. After therapy, the verbal naming of both trained and untrained words was improved at p < 0.001. The improvement was still maintained after 3 months without therapy. This treatment was specific since the word dictation task did not change. Communication in daily life was improved at p < 0.05. This study of a patient with word retrieval disorder due to phonological encoding dysfunction demonstrated the effectiveness of a phonological and multimodal therapeutic treatment.

  11. Human computer confluence applied in healthcare and rehabilitation.

    PubMed

    Viaud-Delmon, Isabelle; Gaggioli, Andrea; Ferscha, Alois; Dunne, Stephen

    2012-01-01

    Human computer confluence (HCC) is an ambitious research program studying how the emerging symbiotic relation between humans and computing devices can enable radically new forms of sensing, perception, interaction, and understanding. It is an interdisciplinary field, bringing together researchers from areas as diverse as pervasive computing, bio-signal processing, neuroscience, electronics, robotics, and virtual and augmented reality, and it offers substantial potential for applications in medicine and rehabilitation.

  12. Multimodal optical analysis discriminates freshly extracted human sample of gliomas, metastases and meningiomas from their appropriate controls

    NASA Astrophysics Data System (ADS)

    Zanello, Marc; Poulon, Fanny; Pallud, Johan; Varlet, Pascale; Hamzeh, H.; Abi Lahoud, Georges; Andreiuolo, Felipe; Ibrahim, Ali; Pages, Mélanie; Chretien, Fabrice; di Rocco, Federico; Dezamis, Edouard; Nataf, François; Turak, Baris; Devaux, Bertrand; Abi Haidar, Darine

    2017-02-01

    Delineating tumor margins as accurately as possible is of paramount importance in surgical oncology: extent of resection is associated with survival, but preservation of healthy surrounding tissue is necessary to maintain quality of life. Real-time analysis of the endogenous fluorescence signal of brain tissues is a promising tool for defining the margins of brain tumors. The present study aims to demonstrate the feasibility of multimodal optical analysis to discriminate fresh samples of gliomas, metastases and meningiomas from their appropriate controls. Tumor samples were studied with a fibered optical endoscope using spectral and fluorescence lifetime analysis, and then on a multimodal set-up for acquiring spectral, one- and two-photon fluorescence images, second harmonic generation signals and two-photon fluorescence lifetime datasets. The obtained data allowed us to differentiate healthy samples from tumor samples. These results confirm the potential clinical relevance of this real-time multimodal optical analysis. The technique can be readily applied to neurosurgical procedures for better delineation of surgical margins.

  13. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval

    PubMed Central

    Bonnici, Heidi M.; Richter, Franziska R.; Yazar, Yasemin

    2016-01-01

    Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. SIGNIFICANCE STATEMENT Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (An

  14. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval.

    PubMed

    Bonnici, Heidi M; Richter, Franziska R; Yazar, Yasemin; Simons, Jon S

    2016-05-18

    Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (AnG) contribute to the retrieval of

  15. Understanding Student Language: An Unsupervised Dialogue Act Classification Approach

    ERIC Educational Resources Information Center

    Ezen-Can, Aysu; Boyer, Kristy Elizabeth

    2015-01-01

    Within the landscape of educational data, textual natural language is an increasingly vast source of learning-centered interactions. In natural language dialogue, student contributions hold important information about knowledge and goals. Automatically modeling the dialogue act of these student utterances is crucial for scaling natural language…

  16. Health care managers learning by listening to subordinates' dialogue training.

    PubMed

    Grill, C; Ahlborg, G; Wikström, E

    2014-01-01

    Middle managers in health care today are expected to continuously and efficiently decide and act on administration, finance, care quality, and the work environment, and strategic communication has become paramount. Since dialogical communication is considered to promote a healthy work environment, the purpose of this paper is to investigate the ways in which health care managers experienced observing subordinates' dialogue training. A qualitative study used semi-structured interviews and documents from eight middle managers in a dialogue programme intervention led by dialogue trainers. The focus was on fostering and assisting workplace dialogue. Conventional qualitative content analysis was used. Managers' experiences were both enriching and demanding, and consisted of becoming aware of communication, meaning perceiving interaction between subordinates as well as their own silent interaction with subordinates and the trainer; discovering communicative actions for leadership, by gaining self-knowledge and recognizing relational leadership models from the trainers, such as acting democratically and pedagogically; and converting theory into practice, signifying practising dialogue-promoting conversation behaviour with subordinates, peers, and superiors. Only eight managers participated in the intervention, but the data afforded a basis for further research. The findings stressed the importance of listening, and of support from superiors, for well-functioning leadership communication at work. Studies focusing on health care managers' communication and dialogue are few. This study contributes to knowledge about these activities in managerial leadership.

  17. Geometric Computation of Human Gyrification Indexes from Magnetic Resonance Images

    DTIC Science & Technology

    2009-04-01

    Report cover-sheet excerpt: "Geometric Computation of Human Gyrification Indexes from Magnetic Resonance Images," by Shu Su, Tonya White, Marcus Schmidt, Chiu-Yen Kao, and Guillermo [surname truncated]; reporting period 2009.

  18. An Interdisciplinary Bibliography for Computers and the Humanities Courses.

    ERIC Educational Resources Information Center

    Ehrlich, Heyward

    1991-01-01

    Presents an annotated bibliography of works related to the subject of computers and the humanities. Groups items into textbooks and overviews; introductions; human and computer languages; literary and linguistic analysis; artificial intelligence and robotics; social issue debates; computers' image in fiction; anthologies; writing and the…

  19. Mothers' multimodal information processing is modulated by multimodal interactions with their infants

    PubMed Central

    Tanaka, Yukari; Fukushima, Hirokata; Okanoya, Kazuo; Myowa-Yamakoshi, Masako

    2014-01-01

    Social learning in infancy is known to be facilitated by multimodal (e.g., visual, tactile, and verbal) cues provided by caregivers. In parallel with infants' development, recent research has revealed that maternal neural activity is altered through interaction with infants, for instance, to be sensitive to infant-directed speech (IDS). The present study investigated the effect of mother-infant multimodal interaction on maternal neural activity. Event-related potentials (ERPs) of mothers were compared with those of non-mothers during perception of tactile-related words primed by tactile cues. Only mothers showed ERP modulation when tactile cues were incongruent with the subsequent words, and only when the words were delivered with IDS prosody. Furthermore, the frequency of mothers' use of those words was correlated with the magnitude of ERP differentiation between congruent and incongruent stimuli presentations. These results suggest that mother-infant daily interactions enhance multimodal integration of the maternal brain in parenting contexts. PMID:25322936

  20. Esperanza y Poder: Democratic Dialogue and Authentic Parent Involvement

    ERIC Educational Resources Information Center

    Stratton, Susan

    2006-01-01

    This study explored ways to increase authentic participation of Mexican American parents in the education of their children. It focused on direct dialogue between Spanish-speaking parents and English-speaking school personnel and how dialogue facilitated group development. The design of the study included phenomenological inquiry and action…

  1. Learning to Talk/Talking to Learn: Teaching Critical Dialogue

    ERIC Educational Resources Information Center

    Marchel, Carol A.

    2007-01-01

    Critical dialogue skills are a beneficial tool for reflective educational practice. Pre-service teachers can learn to examine underlying biases and assumptions that influence many important aspects of educational practice. Critical dialogue skills are thus of particular importance for work with diverse students and their families. This paper…

  2. The Dialogue Journal: A Tool for Building Better Writers

    ERIC Educational Resources Information Center

    Denne-Bolton, Sara

    2013-01-01

    Using dialogue journals gives English language learners valuable writing practice. This article explores topics such as audience, fluency, teacher-student relationships, empowerment, and making the connection to academic writing. And the author gives practical advice on how teachers can institute dialogue journals in their classrooms and how best…

  3. High-Fidelity Design of Multimodal Restorative Interventions in Gulf War Illness

    DTIC Science & Technology

    2017-10-01

    Report cover-sheet excerpt: Award Number W81XWH-15-1-0582; title: "High-Fidelity Design of Multimodal Restorative Interventions in Gulf War Illness." Cited reference: Bockmayr A, Klarner H, Siebert H. Time series dependent analysis of unparametrized Thomas networks. IEEE/ACM Transactions on Computational Biology and [title truncated]. Standard disclaimer: the contents should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation.

  4. Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.

    PubMed

    Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping

    2018-03-23

    Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and in the development of future treatments. Structural and functional neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide powerful imaging modalities for understanding the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and to generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method automatically learns generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 with mild cognitive impairment (MCI; 76 pMCI + 128 sMCI), and 100 normal controls (NC) from Alzheimer's Disease Neuroimaging
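
    The architecture described above can be approximated in a few dozen lines. The sketch below is a hypothetical PyTorch illustration, not the authors' code: it keeps the idea of per-modality 3D patch CNNs whose features are fused for softmax classification, but simplifies the intermediate high-level 2D-CNN ensembling stage to a fully connected fusion layer; all layer sizes and names are assumptions.

    ```python
    # Hypothetical sketch of a cascaded multimodal patch CNN (not the paper's code).
    import torch
    import torch.nn as nn

    class PatchCNN3D(nn.Module):
        """3D CNN mapping one local image patch to a compact feature vector."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),          # -> (B, 32, 1, 1, 1)
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, x):                     # x: (B, 1, D, H, W)
            return self.fc(self.features(x).flatten(1))

    class CascadedMultimodalCNN(nn.Module):
        """Fuses MRI- and PET-patch features; the paper's 2D-CNN ensembling stage
        is simplified here to a fully connected fusion layer."""
        def __init__(self, n_classes=2, feat_dim=64):
            super().__init__()
            self.mri_branch = PatchCNN3D(feat_dim)
            self.pet_branch = PatchCNN3D(feat_dim)
            self.classifier = nn.Sequential(
                nn.Linear(2 * feat_dim, 64), nn.ReLU(),
                nn.Linear(64, n_classes),         # softmax applied via loss or at inference
            )

        def forward(self, mri_patch, pet_patch):
            f = torch.cat([self.mri_branch(mri_patch),
                           self.pet_branch(pet_patch)], dim=1)
            return self.classifier(f)

    # Example: one 32^3 patch per modality for a batch of 4 subjects.
    model = CascadedMultimodalCNN()
    logits = model(torch.randn(4, 1, 32, 32, 32), torch.randn(4, 1, 32, 32, 32))
    print(torch.softmax(logits, dim=1).shape)     # torch.Size([4, 2])
    ```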

  5. The Gandhi Project: Dialogos Philosophical Dialogues and the Ethics and Politics of Intercultural and Interfaith Friendship

    ERIC Educational Resources Information Center

    Helskog, Guro Hansen

    2015-01-01

    The overarching question addressed in this paper is the following: Can Dialogos dialogues conducted over time lead to the development of respect, mutual understanding and friendship among participants with diverse cultural and life stance backgrounds? Dialogos is a pedagogical approach to practical philosophy aimed at enhancing human maturity and…

  6. "Filming in Progress": New Spaces for Multimodal Designing

    ERIC Educational Resources Information Center

    Mills, Kathy A.

    2010-01-01

    Global trends call for new research to investigate multimodal designing mediated by new technologies and the implications for classroom spaces. This article addresses the relationship between new technologies, students' multimodal designing, and the social production of classroom spaces. Multimodal semiotics and sociological principles are applied…

  7. The Paradox of Dialogue

    ERIC Educational Resources Information Center

    Murphy, Peter

    2011-01-01

    The Council of Europe's 2008 "White Paper on Intercultural Dialogue" signalled--with a measure of deep concern--the limits of multiculturalism and its attendant problems of identity politics, communal segregation, and the undermining of rights and freedoms in culturally closed communities. The White Paper proposed the replacement of the…

  8. A Dialogue on Competitiveness.

    ERIC Educational Resources Information Center

    Gomory, Ralph E.; Shapiro, Harold T.

    1988-01-01

    Presents a dialogue between Ralph E. Gomory of IBM and Harold T. Shapiro of Princeton University concerning what science, technology, and education can and cannot do to establish industrial leadership. The discussion focuses on the role of universities and industry, scientific literacy, and cooperation between universities and industry. (YP)

  9. The Next Wave: Humans, Computers, and Redefining Reality

    NASA Technical Reports Server (NTRS)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.

  10. Using Intergroup Dialogue to Promote Social Justice and Change

    ERIC Educational Resources Information Center

    Dessel, Adrienne; Rogge, Mary E.; Garlington, Sarah B.

    2006-01-01

    Intergroup dialogue is a public process designed to involve individuals and groups in an exploration of societal issues such as politics, racism, religion, and culture that are often flashpoints for polarization and social conflict. This article examines intergroup dialogue as a bridging mechanism through which social workers in clinical, other…

  11. Robust Multimodal Dictionary Learning

    PubMed Central

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  12. Human factors dimensions in the evolution of increasingly automated control rooms for near-earth satellites

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1982-01-01

    The NASA-Goddard Space Flight Center is responsible for the control and ground support of all of NASA's unmanned near-earth satellites. Traditionally, each satellite had its own dedicated mission operations room. In the mid-seventies, an integration of some of these dedicated facilities was begun with the primary objective of reducing costs. In this connection, the Multi-Satellite Operations Control Center (MSOCC) was designed. MSOCC is currently a labor-intensive operation. Recently, Goddard has become increasingly aware of human factors and human-machine interface issues. A summary is provided of some of the attempts to apply human factors considerations in the design of command and control environments. Current and future activities with respect to human factors and systems design are discussed, giving attention to the allocation of tasks between human and computer, and the interface for the human-computer dialogue.

  13. Is Human-Computer Interaction Social or Parasocial?

    ERIC Educational Resources Information Center

    Sundar, S. Shyam

    Conducted in the attribution-research paradigm of social psychology, a study examined whether human-computer interaction is fundamentally social (as in human-human interaction) or parasocial (as in human-television interaction). All 30 subjects (drawn from an undergraduate class on communication) were exposed to an identical interaction with…

  14. Computational method for multi-modal microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2017-02-01

    In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously, based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
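
    For reference, the transport of intensity equation that underlies this approach is commonly written in the following standard form (taken from the general optics literature, not from this record), relating the axial intensity derivative of a paraxial beam to its phase:

    ```latex
    % Transport of intensity equation (TIE); k = 2\pi/\lambda is the wavenumber,
    % I is intensity, \phi is phase, and \nabla_\perp is the transverse gradient.
    \[
      -k \,\frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
      = \nabla_\perp \cdot \bigl[\, I(\mathbf{r}_\perp, z)\, \nabla_\perp \phi(\mathbf{r}_\perp, z) \,\bigr]
    \]
    ```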

  15. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
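
    For readers unfamiliar with the core update that these schemes modify, the sketch below shows one stochastic ensemble Kalman filter analysis step in plain NumPy. It is a minimal illustration under simple assumptions, not the authors' implementation; the localization, GMM clustering, block updating and dimension reduction described above are omitted.

    ```python
    # Minimal stochastic EnKF analysis step (illustrative sketch only).
    import numpy as np

    def enkf_update(X, y, H, R, rng=None):
        """X: (n_param, n_ens) parameter ensemble; y: (n_obs,) observations;
        H: (n_obs, n_param) observation operator; R: (n_obs, n_obs) obs-error cov."""
        if rng is None:
            rng = np.random.default_rng(0)
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
        Pf = A @ A.T / (n_ens - 1)                       # sample forecast covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
        # Perturbed observations (stochastic EnKF variant)
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
        return X + K @ (Y - H @ X)

    # Toy example: 50 parameters, 5 observations, 100 ensemble members.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 100))
    H = np.zeros((5, 50)); H[np.arange(5), np.arange(5)] = 1.0   # observe first 5 params
    R = 0.1 * np.eye(5)
    Xa = enkf_update(X, rng.normal(size=5), H, R)
    print(Xa.shape)   # (50, 100)
    ```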

  16. "He Said What?!" Constructed Dialogue in Various Interface Modes

    ERIC Educational Resources Information Center

    Young, Lesa; Morris, Carla; Langdon, Clifton

    2012-01-01

    This study analyzes the manifestation of constructed dialogue in ASL narratives as dependent on the interface mode (i.e., face-to-face conversation, electronic conversation over videophone, and vlog monologues). Comparisons of eye gaze over three interface modes shows how aspects of constructed dialogue are altered to fit the communication mode.…

  17. Facilitating Difficult Dialogues at the Intersections of Religious Privilege

    ERIC Educational Resources Information Center

    Watt, Sherry K.

    2009-01-01

    A core definition of a "difficult dialogue" is a verbal or written exchange of ideas or opinions among citizens within a community that centers on an awakening of potentially conflicting views about beliefs and values. As informed by Fried's definition of religious privilege (2007), difficult dialogue at the intersections of religious privilege…

  18. Brokered dialogue: A new research method for controversial health and social issues.

    PubMed

    Parsons, Janet A; Lavery, James V

    2012-07-02

    Dialogue is a foundational feature of social life and an important way in which we come to understand one another. In situations of controversy, dialogue is often absent because of a range of social barriers. We have developed a new film-based qualitative research method for studying controversial issues in healthcare and social policy. We call this method Brokered Dialogue. Theoretically informed by the traditions of narrative inquiry and visual anthropology, the method is premised on the idea that dialogue possesses features making it unique as a generator of new knowledge and opportunities for social intervention. Film is not only an extraordinarily rich data source, but also an excellent medium for knowledge transfer and dissemination. The paper introduces the Brokered Dialogue method. We outline its critical steps, including the procedures for sampling, data collection and data analysis of both textual and visual data. Participants in a Brokered Dialogue engage in filmed interviews that capture their perspectives on a given topic; they then share their perspectives with, and pose questions of, one another through the medium of film. Using a participatory editing process, only footage that participants feel comfortable showing to others is incorporated. This technique offers participants a 'safe' space for respectful interaction. The editing process itself is analytic, and the final assembly of footage approximates a dialogue on the topic at hand. A link to a film produced from a project piloting the method is provided to demonstrate its real-world application. Brokered Dialogue is a method for promoting respectful interactions among those with seemingly divergent views on a controversial topic and for discovering critical points of divergence that may represent pathways for improvement. While the end product is a 'film', the goal is to have these films used as catalysts for ongoing respectful dialogue and problem-solving concerning the topic at hand informing

  19. Capabilities for Intercultural Dialogue

    ERIC Educational Resources Information Center

    Crosbie, Veronica

    2014-01-01

    The capabilities approach offers a valuable analytical lens for exploring the challenge and complexity of intercultural dialogue in contemporary settings. The central tenets of the approach, developed by Amartya Sen and Martha Nussbaum, involve a set of humanistic goals including the recognition that development is a process whereby people's…

  20. Multimode-singlemode-multimode fiber sensor for alcohol sensing application

    NASA Astrophysics Data System (ADS)

    Rofi'ah, Iftihatur; Hatta, A. M.; Sekartedjo, Sekartedjo

    2016-11-01

    Alcohol is a volatile, flammable liquid that is soluble in both polar and non-polar substances and is used in several industrial sectors. Among the alcohol detection methods now in wide use is the optical fiber sensor. In this paper, a fiber optic sensor based on a multimode-singlemode-multimode (MSM) structure is used to detect alcohol solutions over a concentration range of 0-3%. The working principle of the sensor exploits modal interference between the core modes and the cladding modes, which makes the sensor sensitive to environmental changes. The results showed that the characteristics of the sensor were not affected by the length of the singlemode fiber (SMF). We found that a sensor with a 5 mm singlemode section can sense alcohol with a sensitivity of 0.107 dB/v%.
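
    As a quick back-of-the-envelope check of what this sensitivity implies (our own reading of the reported numbers, not a value given in the record), the expected full-scale change in transmitted power over the 0-3% range is roughly:

    ```python
    # Full-scale response implied by the reported sensitivity (illustrative arithmetic).
    sensitivity_db_per_volpct = 0.107   # reported sensitivity, dB per vol%
    full_scale_volpct = 3.0             # upper end of the tested concentration range
    print(sensitivity_db_per_volpct * full_scale_volpct)  # ~0.32 dB
    ```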

  1. Nanoengineered multimodal contrast agent for medical image guidance

    NASA Astrophysics Data System (ADS)

    Perkins, Gregory J.; Zheng, Jinzi; Brock, Kristy; Allen, Christine; Jaffray, David A.

    2005-04-01

    Multimodality imaging has gained momentum in radiation therapy planning and image-guided treatment delivery. Specifically, computed tomography (CT) and magnetic resonance (MR) imaging are two complementary imaging modalities often utilized in radiation therapy for visualization of anatomical structures for tumour delineation and accurate registration of image data sets for volumetric dose calculation. The development of a multimodal contrast agent for CT and MR with prolonged in vivo residence time would provide long-lasting spatial and temporal correspondence of the anatomical features of interest, and therefore facilitate multimodal image registration, treatment planning and delivery. The multimodal contrast agent investigated consists of nano-sized stealth liposomes encapsulating conventional iodine and gadolinium-based contrast agents. The average loading achieved was 33.5 +/- 7.1 mg/mL of iodine for iohexol and 9.8 +/- 2.0 mg/mL of gadolinium for gadoteridol. The average liposome diameter was 46.2 +/- 13.5 nm. The system was found to be stable in physiological buffer over a 15-day period, releasing 11.9 +/- 1.1% and 11.2 +/- 0.9% of the total amounts of iohexol and gadoteridol loaded, respectively. 200 minutes following in vivo administration, the contrast agent maintained a relative contrast enhancement of 81.4 +/- 13.05 differential Hounsfield units (ΔHU) in CT (40% decrease from the peak signal value achieved 3 minutes post-injection) and 731.9 +/- 144.2 differential signal intensity (ΔSI) in MR (46% decrease from the peak signal value achieved 3 minutes post-injection) in the blood (aorta), a relative contrast enhancement of 38.0 +/- 5.1 ΔHU (42% decrease from the peak signal value achieved 3 minutes post-injection) and 178.6 +/- 41.4 ΔSI (62% decrease from the peak signal value achieved 3 minutes post-injection) in the liver (parenchyma), a relative contrast enhancement of 9.1 +/- 1.7 ΔHU (94% decrease from the peak signal value achieved 3 minutes

  2. On the Rhetorical Contract in Human-Computer Interaction.

    ERIC Educational Resources Information Center

    Wenger, Michael J.

    1991-01-01

    An exploration of the rhetorical contract--i.e., the expectations for appropriate interaction--as it develops in human-computer interaction revealed that direct manipulation interfaces were more likely to establish social expectations. Study results suggest that the social nature of human-computer interactions can be examined with reference to the…

  3. Researching Teachers' and Parents' Perceptions of Dialogue

    ERIC Educational Resources Information Center

    Tveit, Anne Dorthe

    2014-01-01

    While there has been a great deal of research done on parent involvement and the challenges of conducting effective dialogue in parent-teacher meetings, less attention has been paid to how teachers and parents themselves perceive dialogue. The purpose of the present article is to study whether deliberative principles are vital to teachers'…

  4. The High Stakes of Artificial Dialogue in Teacher Education

    ERIC Educational Resources Information Center

    Simpson, Douglas J.

    2009-01-01

    Talking about important events, experiences, and ideas is a crucial societal concern for many reasons. In the field of teacher education, dialogue may be even more difficult because it is sometimes seen as being both essential and troubling. Dialogue is complicated because some people are fearful of open inquiry; others are inclined to rant; and…

  5. Exploring Meteorology Education in Community College: Lecture-based Instruction and Dialogue-based Group Learning

    NASA Astrophysics Data System (ADS)

    Finley, Jason Paul

    This study examined the impact of dialogue-based group instruction on student learning and engagement in community college meteorology education. A quasi-experimental design was used to compare lecture-based instruction with dialogue-based group instruction during two class sessions at one community college in southern California. Pre- and post-tests were used to measure learning and interest, while surveys were conducted two days after the learning events to assess engagement, perceived learning, and application of content. The results indicated that the dialogue-based group instruction was more successful in helping students learn than the lecture-based instruction. Each question that assessed learning had a higher score for the dialogue group that was statistically significant (alpha < 0.05) compared to the lecture group. The survey questions about perceived learning and application of content also exhibited higher scores that were statistically significant for the dialogue group. The qualitative portion of these survey questions supported the quantitative results and showed that the dialogue students were able to remember more concepts and apply these concepts to their lives. Dialogue students were also more engaged, as three out of the five engagement-related survey questions revealed statistically significantly higher scores for them. The qualitative data also supported increased engagement for the dialogue students. Interest in specific meteorological topics did not change significantly for either group of students; however, interest in learning about severe weather was higher for the dialogue group. Neither group found the learning events markedly meaningful, although more students from the dialogue group found pronounced meaning centered on applying severe weather knowledge to their lives. Active engagement in the dialogue approach kept these students from becoming distracted and allowed them to become absorbed in the learning event. This higher engagement

  6. Effects of dialogue groups on physicians' work environment.

    PubMed

    Bergman, David; Arnetz, Bengt; Wahlström, Rolf; Sandahl, Christer

    2007-01-01

    The purpose of this study is to evaluate whether dialogue groups for physicians can improve their psychosocial work environment. The study assessed the impact of eight dialogue groups, which involved 60 physicians at a children's clinic in one of the main hospitals in Stockholm. Psychosocial work environment measures were collected through a validated instrument sent to all physicians (n = 68) in 1999, 2001 and 2003. Follow-up data were collected after the termination of the groups. The overall score of organizational and staff wellbeing, as assessed by the physicians at the clinic, deteriorated from 1999 until 2003 and then improved 2004. This shift in the trend coincided with the intervention. No other factors which might explain this shift could be identified. In a naturalistic study of this kind it is not possible to prove any causal relationships. A controlled survey of management programmes concerning the work environment among physicians would be of interest for further research. The results suggest that dialogue groups may be one way to improve the psychosocial work environment for physicians. There is a lack of intervention studies regarding the efficacy of management programmes directed toward physicians, concerning the effects on professional and personal wellbeing. This is the first time dialogue groups have been studied within a health care setting.

  7. Programmable multimode quantum networks

    PubMed Central

    Armstrong, Seiji; Morizur, Jean-François; Janousek, Jiri; Hage, Boris; Treps, Nicolas; Lam, Ping Koy; Bachor, Hans-A.

    2012-01-01

    Entanglement between large numbers of quantum modes is the quintessential resource for future technologies such as the quantum internet. Conventionally, the generation of multimode entanglement in optics requires complex layouts of beamsplitters and phase shifters in order to transform the input modes into entangled modes. Here we report the highly versatile and efficient generation of various multimode entangled states with the ability to switch between different linear optics networks in real time. By defining our modes to be combinations of different spatial regions of one beam, we may use just one pair of multi-pixel detectors in order to measure multiple entangled modes. We programme virtual networks that are fully equivalent to the physical linear optics networks they are emulating. We present results for N=2 up to N=8 entangled modes here, including N=2, 3, 4 cluster states. Our approach introduces the highly sought after attributes of flexibility and scalability to multimode entanglement. PMID:22929783
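
    A rough way to picture the "virtual network" idea is as software post-processing: photocurrents from different pixel regions are combined with an orthonormal matrix, which is mathematically equivalent to passing the corresponding modes through a physical linear-optics network. The NumPy sketch below is our own illustrative toy, not the authors' analysis code; the number of regions and the choice of matrix are assumptions.

    ```python
    # Toy illustration: "virtual" modes as orthonormal combinations of pixel-region signals.
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions = 8
    x = rng.normal(size=(n_regions, 10000))   # simulated photocurrents per pixel region

    # Any orthogonal matrix defines one emulated linear-optics network;
    # switching networks is just switching U in software.
    U, _ = np.linalg.qr(rng.normal(size=(n_regions, n_regions)))
    virtual_modes = U @ x                     # emulated network outputs

    # Orthonormality preserves total measured power (basic sanity check).
    print(np.allclose((x**2).sum(), (virtual_modes**2).sum()))
    ```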

  8. Health care education for dialogue and dialogic relationships.

    PubMed

    Glen, S

    1999-01-01

    This article will address the question: how can health care education best take seriously the task of educating for professional practice within a post-traditional, liberal democratic society? In the setting of modernity, the altered personal and professional self has to be explored and constructed as part of a reflective process of connecting personal and professional change: in essence, to develop self-knowledge. A moral life, or 'working morality', that evolves out of a process of ongoing dialogue and conversation is required. What is advocated here is a more social model of health care education that acknowledges a social or communal dimension to knowledge and the centrality of relationships for the full development of the individual personally and professionally, fosters our capacity to identify who we are both personally and professionally, connects reason and dialogue, and educates for dialogue and dialogic relationships.

  9. Languaging in Cyberspace: A Case Study of the Effects of Peer-Peer Collaborative Dialogue on the Acquisition of English Idioms in Task-Based Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Teng, Xuan

    2015-01-01

    Despite the growing interest in examining the link between peer-peer collaborative dialogue and second language (L2) development in recent years (Swain, Brooks, & Tocalli-Beller, 2002), much of the empirical work in this regard focused on face-to-face communication, leaving the operationalization of collaborative dialogue in text-based…

  10. Adapting and Implementing Open Dialogue in the Scandinavian Countries: A Scoping Review.

    PubMed

    Buus, Niels; Bikic, Aida; Jacobsen, Elise Kragh; Müller-Nielsen, Klaus; Aagaard, Jørgen; Rossen, Camilla Blach

    2017-05-01

    Open Dialogue is a resource-oriented mental health approach, which mobilises a crisis-struck person's psychosocial network resources. This scoping review 1) identifies the range and nature of literature on the adoption of Open Dialogue in Scandinavia in places other than the original sites in Finland, and 2) summarises this literature. We included 33 publications. Most studies in this scoping review were published as "grey" literature and most grappled with how to implement Open Dialogue faithfully. In the Scandinavian research context, Open Dialogue was mainly described as a promising and favourable approach to mental health care.

  11. 78 FR 70281 - United States-Mexico High Level Economic Dialogue

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-25

    ... DEPARTMENT OF COMMERCE United States-Mexico High Level Economic Dialogue AGENCY: International... stakeholder input to the Federal Register notice on the United States-Mexico High Level Economic Dialogue... economic platforms in the world. The sustained competitiveness and continued growth of the North American...

  12. 'Easier said than done': empowering dialogues with patients at the pain clinic - the health professionals' perspective.

    PubMed

    Tveiten, Sidsel; Meyer, Ingrid

    2009-11-01

    The aim of the present study was to develop knowledge of the dialogue between health professionals and the patient in the empowerment process. Dialogue is important with regard to empowerment. Transcript-based qualitative content analysis was used to reveal the meaning of five health professionals' views and reflections as reported during three focus group interviews. The dialogues are important and have varying purposes and characteristics. Conducting good dialogues presents challenges; engaging in dialogues according to the principles of empowerment was easier said than done. Suggested measures include establishing supervision groups, considering the dialogue as part of the therapy, and organizing the service in a way that makes dialogues and real participation possible. Further research may focus on the patients' views and reflections regarding the dialogues with the health professionals. What is new is knowledge about the complexity of, and the challenges in, conducting dialogues in the empowerment process.

  13. Multimodal registration via spatial-context mutual information.

    PubMed

    Yi, Zhao; Soatto, Stefano

    2011-01-01

    We propose a method to efficiently compute mutual information between high-dimensional distributions of image patches. This in turn is used to perform accurate registration of images captured under different modalities, while exploiting their local structure otherwise missed in traditional mutual information definition. We achieve this by organizing the space of image patches into orbits under the action of Euclidean transformations of the image plane, and estimating the modes of a distribution in such an orbit space using affinity propagation. This way, large collections of patches that are equivalent up to translations and rotations are mapped to the same representative, or "dictionary element". We then show analytically that computing mutual information for a joint distribution in this space reduces to computing mutual information between the (scalar) label maps, and between the transformations mapping each patch into its closest dictionary element. We show that our approach improves registration performance compared with the state of the art in multimodal registration, using both synthetic and real images with quantitative ground truth.
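
    The final reduction described above is easy to make concrete: once each patch has been replaced by a discrete dictionary label, mutual information is computed between two scalar label maps. The sketch below is an illustrative NumPy estimate from a joint histogram on a toy example of our own, not the paper's implementation.

    ```python
    # Mutual information between two discrete label maps via their joint histogram.
    import numpy as np

    def mutual_information(labels_a, labels_b):
        """labels_a, labels_b: integer label maps of the same shape; MI in nats."""
        a = np.asarray(labels_a).ravel()
        b = np.asarray(labels_b).ravel()
        joint = np.zeros((a.max() + 1, b.max() + 1))
        np.add.at(joint, (a, b), 1)              # joint counts
        pxy = joint / joint.sum()                # joint distribution
        px = pxy.sum(axis=1, keepdims=True)      # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Toy example: two correlated 64x64 label maps with 8 labels each.
    rng = np.random.default_rng(0)
    A = rng.integers(0, 8, size=(64, 64))
    B = np.where(rng.random((64, 64)) < 0.7, A, rng.integers(0, 8, size=(64, 64)))
    print(round(mutual_information(A, B), 3))
    ```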

  14. Nanoparticles in Higher-Order Multimodal Imaging

    NASA Astrophysics Data System (ADS)

    Rieffel, James Ki

    Imaging procedures are a cornerstone of our current medical infrastructure. In everything from screening and diagnostics to treatment, medical imaging is perhaps our greatest tool for evaluating individual health. Recently, there has been a tremendous increase in the development of multimodal systems that combine the strengths of complementary imaging technologies to overcome their independent weaknesses. Clinically, this has manifested in the virtually universal manufacture of combined PET-CT scanners. With this push toward more integrated imaging, new contrast agents with multimodal functionality are needed. Nanoparticle-based systems are ideal candidates based on their unique size, properties, and diversity. In chapter 1, an extensive background on recent multimodal imaging agents capable of enhancing signal or contrast in three or more modalities is presented. Chapter 2 discusses the development and characterization of a nanoparticulate probe with hexamodal imaging functionality. It is my hope that the information contained in this thesis will demonstrate the many benefits of nanoparticles in multimodal imaging, and provide insight into the potential of fully integrated imaging.

  15. Three visions of doctoring: a Gadamerian dialogue.

    PubMed

    Chin-Yee, Benjamin; Messinger, Atara; Young, L Trevor

    2018-04-16

    Medicine in the twenty-first century faces an 'identity crisis,' as it grapples with the emergence of various 'ways of knowing,' from evidence-based and translational medicine, to narrative-based and personalized medicine. While each of these approaches has uniquely contributed to the advancement of patient care, this pluralism is not without tension. Evidence-based medicine is not necessarily individualized; personalized medicine may be individualized but is not necessarily person-centered. As novel technologies and big data continue to proliferate today, the focus of medical practice is shifting away from the dialogic encounter between doctor and patient, threatening the loss of humanism that many view as integral to medicine's identity. As medical trainees, we struggle to synthesize medicine's diverse and evolving 'ways of knowing' and to create a vision of doctoring that integrates new forms of medical knowledge into the provision of person-centered care. In search of answers, we turned to twentieth-century philosopher Hans-Georg Gadamer, whose unique outlook on "health" and "healing," we believe, offers a way forward in navigating medicine's 'messy pluralism.' Drawing inspiration from Gadamer's emphasis on dialogue and 'practical wisdom' (phronesis), we initiated a dialogue with the dean of our medical school to address the question of how medical trainees and practicing clinicians alike can work to create a more harmonious pluralism in medicine today. We propose that implementing a pluralistic approach ultimately entails 'bridging' the current divide between scientific theory and the practical art of healing, and involves an iterative and dialogic process of asking questions and seeking answers.

  16. Parallel structures in human and computer memory

    NASA Astrophysics Data System (ADS)

    Kanerva, Pentti

    1986-08-01

    If we think of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library: We recognize a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. This paper is about how to construct a computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. The paper concludes that the frame problem of artificial intelligence could be solved by the use of such a memory if we were able to encode information about the world properly.
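
    The kind of best-match retrieval described here can be sketched in a few lines. The toy code below is only an illustration of content-addressable recall from approximate cues (Hamming distance over binary vectors); it is not Kanerva's Sparse Distributed Memory.

      # Toy sketch (an illustration, not Kanerva's actual Sparse Distributed Memory):
      # a best-match memory that retrieves stored binary patterns from noisy cues.
      import numpy as np

      class BestMatchMemory:
          def __init__(self):
              self.patterns = []          # stored binary vectors

          def store(self, pattern):
              self.patterns.append(np.asarray(pattern, dtype=np.uint8))

          def recall(self, cue):
              """Return the stored pattern with the smallest Hamming distance to the cue."""
              cue = np.asarray(cue, dtype=np.uint8)
              distances = [int(np.sum(p != cue)) for p in self.patterns]
              return self.patterns[int(np.argmin(distances))]

      rng = np.random.default_rng(1)
      mem = BestMatchMemory()
      originals = [rng.integers(0, 2, 256, dtype=np.uint8) for _ in range(5)]
      for p in originals:
          mem.store(p)

      # A "fuzzy photograph": flip 20 of 256 bits and recall the original.
      noisy = originals[2].copy()
      flip = rng.choice(256, size=20, replace=False)
      noisy[flip] ^= 1
      print(np.array_equal(mem.recall(noisy), originals[2]))   # True with high probability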

  17. Parallel structures in human and computer memory

    NASA Technical Reports Server (NTRS)

    Kanerva, P.

    1986-01-01

    If one thinks of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library. One recognizes a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. A computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do is constructed. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. It is concluded that the frame problem of artificial intelligence could be solved by the use of such a memory if one were able to encode information about the world properly.

  18. Enhancing image classification models with multi-modal biomarkers

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry

    2011-03-01

    Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
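
    The augmentation strategy described here, concatenating image-derived features with laboratory and EKG biomarkers before classification, can be sketched as follows. The data are synthetic and the model choice is arbitrary, so this is an illustration of the idea rather than the authors' CAD pipeline; the biomarker names in the comment are taken from the abstract only as placeholders.

      # Illustrative sketch only (synthetic data, not the authors' CAD pipeline):
      # augment image features with multi-modal biomarkers before classification.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      n = 200
      image_features = rng.normal(size=(n, 10))     # e.g. texture/intensity statistics
      biomarkers = rng.normal(size=(n, 4))          # e.g. ESR, fibrinogen, QRS, I:40
      severity = (image_features[:, 0] + biomarkers[:, 0] + rng.normal(size=n)) > 0

      X_image_only = image_features
      X_multimodal = np.hstack([image_features, biomarkers])

      for name, X in [("image only", X_image_only), ("image + biomarkers", X_multimodal)]:
          acc = cross_val_score(LogisticRegression(max_iter=1000), X, severity, cv=5).mean()
          print(f"{name}: {acc:.3f}")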

  19. Developing Face-to-Face Argumentation Skills: Does Arguing on the Computer Help?

    ERIC Educational Resources Information Center

    Iordanou, Kalypso

    2013-01-01

    Arguing on the computer was used as a method to promote development of face-to-face argumentation skills in middle schoolers. In the study presented, sixth graders engaged in electronic dialogues with peers on a controversial topic and in some reflective activities based on transcriptions of the dialogues. Although participants initially exhibited…

  20. Hardware Considerations for Computer Based Education in the 1980's.

    ERIC Educational Resources Information Center

    Hirschbuhl, John J.

    1980-01-01

    In the future, computers will be needed to sift through the vast proliferation of available information. Among new developments in computer technology are the videodisc, microcomputers, and holography. Predictions for future developments include laser libraries for the visually handicapped and Computer Assisted Dialogue. (JN)

  1. WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.

    Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, by contrast, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, which scatter orders of magnitude less than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: (1) to understand the contrast mechanism of PAT; (2) to understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed

  2. A Response to Jane Sahi's "Dialogue as Education: Martin Buber"

    ERIC Educational Resources Information Center

    Baniwal, Vikas

    2014-01-01

    This article is inspired by Jane Sahi's commentary, "Dialogue as Education: Martin Buber," published under the feature "Classics with Commentary" in the Monsoon 2005 issue of "Contemporary Education Dialogue." I seek to further the discussion of the contributions of Martin Buber to the discourse of education through…

  3. Survey of statewide multimodal transportation planning practices.

    DOT National Transportation Integrated Search

    2002-01-01

    Multimodal planning refers to planning for different modes of transportation (e.g., automobile, bus, bicycles, pedestrian, aviation, rail, waterways) and the connections among them. This study identified states thought to excel in multimodal planning...

  4. Assessing Physician-Patient Dialogues About Chronic Migraine During Routine Office Visits.

    PubMed

    Buse, Dawn C; Gillard, Patrick; Arctander, Kaitlyn; Kuang, Amy W; Lipton, Richard B

    2018-05-04

    To assess physician-patient communication and identify the frequency of use of specific communication techniques by analyzing recordings of routinely scheduled medical encounters for patients with clinician-identified chronic migraine. Chronic migraine is an under-diagnosed, under-treated, and highly burdensome disease. Effective medical communication is integral to optimal medical care, including providing accurate diagnoses, creating effective treatment plans, and enhancing patient adherence. Communication patterns during office visits may be a target for intervention to improve outcomes for people with chronic migraine. This was a prospective, observational study based on analysis of audio recordings collected during neurologist-patient chronic migraine dialogues. Twenty neurologists from a US neurology panel maintained by Verilogue, Inc., a research organization specializing in healthcare dialogues, were invited to identify patients with chronic migraine and record clinical encounters with their patients. Both new patient visits and follow-up visits were included in this analysis. Neurologist-patient dialogues were audio-recorded, anonymized, transcribed, and analyzed by a sociolinguist for the presence of prespecified communication parameters, strategies, and specific language indicative of optimal migraine-related medical care. Fourteen out of the 20 invited neurologists (70.0%) accepted the study invitation and recorded 35 encounters with patients eligible for the study. The patient sample was 91.4% female (n = 32/35), with a mean age of 46 years. On average, there were 17 headache-related questions per visit; 82.0% of questions were closed-ended (n = 369/450). Headache/migraine frequency was elicited in 77.1% of the dialogues (n = 27/35), but headache days per month was assessed in only a single dialogue. Only one neurologist utilized the ask-tell-ask technique. Headache-related disability was discussed in 22.9% of the dialogues (n = 8

  5. Brokered dialogue: A new research method for controversial health and social issues

    PubMed Central

    2012-01-01

    Abstract Background Dialogue is a foundational feature of social life and an important way in which we come to understand one another. In situations of controversy dialogue is often absent because of a range of social barriers. We have developed a new film-based qualitative research method for studying controversial issues in healthcare and social policy. We call this method Brokered Dialogue. Theoretically informed by the traditions in narrative inquiry and visual anthropology, the method is premised on the idea that dialogue possesses features making it unique as a generator of new knowledge and opportunities for social intervention. Film is not only an extraordinarily rich data source, but an excellent medium for knowledge transfer and dissemination. Discussion The paper introduces the Brokered Dialogue method. We outline its critical steps, including the procedures for sampling, data collection and data analysis of both textual and visual data. Participants in a Brokered Dialogue engage in filmed interviews that capture their perspectives on a given topic; they then share their perspectives with, and pose questions of, one another through the medium of film. Using a participatory editing process, only footage that participants feel comfortable showing to others is incorporated. This technique offers participants a ‘safe’ space for respectful interaction. The editing process itself is analytic, and the final assembly of footage approximates a dialogue on the topic at hand. A link to a film produced from a project piloting the method is provided to demonstrate its real world application. Summary Brokered Dialogue is a method for promoting respectful interactions among those with seemingly divergent views on a controversial topic and for discovering critical points of divergence that may represent pathways for improvement. While the end product is a ‘film’, the goal is to have these films used as catalysts for ongoing respectful dialogue and problem

  6. Rapid multi-modality preregistration based on SIFT descriptor.

    PubMed

    Chen, Jian; Tian, Jie

    2006-01-01

    This paper describes a scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, wherein preregistration is achieved by matching corresponding keypoints between two images. Applying the SIFT preregistration step, whose cost grows only on the order of O(n), before refined registration reduces the overall computational burden. SIFT features are highly distinctive, invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast, so they are robust and repeatable for coarsely matching two images. We also altered the descriptor so that our method can handle multimodality preregistration.
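
    A coarse SIFT-based preregistration of this kind can be sketched with OpenCV, assuming a build in which cv2.SIFT_create is available. The sketch uses Lowe's ratio test and a robust similarity-transform estimate; it does not include the authors' descriptor modification for multimodality.

      # Sketch of SIFT-based coarse preregistration with OpenCV (assumes cv2.SIFT_create
      # is available in your build); it does NOT include the authors' descriptor changes.
      import cv2
      import numpy as np

      def sift_preregister(fixed_gray, moving_gray, ratio=0.75):
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(fixed_gray, None)
          kp2, des2 = sift.detectAndCompute(moving_gray, None)

          # Lowe's ratio test on 2-nearest-neighbour matches.
          matcher = cv2.BFMatcher(cv2.NORM_L2)
          good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                  if m.distance < ratio * n.distance]
          if len(good) < 3:
              return None   # not enough correspondences for a similarity estimate

          src = np.float32([kp1[m.queryIdx].pt for m in good])
          dst = np.float32([kp2[m.trainIdx].pt for m in good])
          # Robustly estimate a coarse similarity transform (rotation, scale, translation).
          transform, _ = cv2.estimateAffinePartial2D(dst, src, method=cv2.RANSAC)
          return transform   # 2x3 matrix usable as initialization for refined registration

    The returned matrix is only a starting point; a subsequent intensity-based refinement step would normally follow, as the abstract describes.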

  7. Intergroup Dialogue: Education for a Broad Conception of Civic Engagement

    ERIC Educational Resources Information Center

    Gurin, Patricia; Nagda, Biren A.; Sorensen, Nicholas

    2011-01-01

    Intergroup dialogue provides what students need in order to relate and collaborate across differences, something they have to do in community projects that usually involve interactions across racial, social class, religious, and geographical divides. In this article, the authors demonstrate the efficacy of intergroup dialogue, drawing from a…

  8. Analyzing Empirical Notions of Suffering: Advancing Youth Dialogue and Education

    ERIC Educational Resources Information Center

    Baring, Rito V.

    2010-01-01

    This article explores the possibilities of advancing youth dialogue and education among the Filipino youth using empirical notions of students on suffering. Examining empirical data, this analysis exposes uncharted notions of suffering and shows relevant meanings that underscore the plausible trappings of youth dialogue and its benefits on…

  9. Exploring Poetry through Interactive Computer Programs.

    ERIC Educational Resources Information Center

    Nimchinsky, Howard; Camp, Jocelyn

    The goal of a project was to design, test, and evaluate several computer programs that allow students in introductory literature and poetry courses to explore a poem in detail and, through a dialogue with the program, to develop their own interpretation of it. Computer programs were completed on poems by Robert Frost and W.H. Auden. Both programs…

  10. Personalized, relevance-based Multimodal Robotic Imaging and augmented reality for Computer Assisted Interventions.

    PubMed

    Navab, Nassir; Hennersperger, Christoph; Frisch, Benjamin; Fürst, Bernhard

    2016-10-01

    In the last decade, many researchers in medical image computing and computer assisted interventions across the world focused on the development of the Virtual Physiological Human (VPH), aiming at changing the practice of medicine from classification and treatment of diseases to that of modeling and treating patients. These projects resulted in major advancements in segmentation, registration, morphological, physiological and biomechanical modeling based on state-of-the-art medical imaging as well as other sensory data. However, a major issue which has not yet come into focus is personalizing intra-operative imaging, allowing for optimal treatment. In this paper, we discuss the personalization of the imaging and visualization process with particular focus on satisfying the challenging requirements of computer assisted interventions. We discuss such requirements and review a series of scientific contributions made by our research team to tackle some of these major challenges. Copyright © 2016. Published by Elsevier B.V.

  11. Coordination of the health policy dialogue process in Guinea: pre- and post-Ebola.

    PubMed

    Ade, Nadege; Réne, Adzodo; Khalifa, Mara; Babila, Kevin Ousman; Monono, Martin Ekeke; Tarcisse, Elongo; Nabyonga-Orem, Juliet

    2016-07-18

    Policy dialogue can be defined as an iterative process that involves a broad range of stakeholders discussing a particular issue with a concrete purpose in mind. Policy dialogue in health is increasingly being recognised by health stakeholders in developing countries as an important process or mechanism for improving collaboration and harmonization in health and for developing comprehensive and evidence-based health sector strategies and plans. It is with this perspective in mind that Guinea, in 2013, started a policy dialogue process, engaging a plethora of actors to revise the country's national health policy and develop a new national health development plan (2015-2024). This study examines the coordination of the policy dialogue process in developing these key strategic governance documents of the Guinean health sector from the actors' perspective. A qualitative case study approach was undertaken, comprising interviews with key stakeholders who participated in the policy dialogue process. A review of the literature informed the development of a conceptual framework and the data collection survey questionnaire. The results were analysed both inductively and deductively. A total of 22 out of 32 individuals were interviewed. The results suggest both areas of strength and weakness in the coordination of the policy dialogue process in Guinea. The aspects of good coordination observed were the iterative nature of the dialogue and the availability of neutral and well-experienced facilitators. Weak coordination was perceived through the unavailability of supporting documentation, and time and financial constraints experienced during the dialogue process. The onset of the Ebola epidemic in Guinea impacted on coordination dynamics by causing a slowdown of its activities and then its virtual halt. The findings herein highlight the need for policy dialogue coordination structures to have the necessary administrative and institutional support to facilitate their

  12. Cortical inter-hemispheric circuits for multimodal vocal learning in songbirds.

    PubMed

    Paterson, Amy K; Bottjer, Sarah W

    2017-10-15

    Vocal learning in songbirds and humans is strongly influenced by social interactions based on sensory inputs from several modalities. Songbird vocal learning is mediated by cortico-basal ganglia circuits that include the SHELL region of lateral magnocellular nucleus of the anterior nidopallium (LMAN), but little is known concerning neural pathways that could integrate multimodal sensory information with SHELL circuitry. In addition, cortical pathways that mediate the precise coordination between hemispheres required for song production have been little studied. In order to identify candidate mechanisms for multimodal sensory integration and bilateral coordination for vocal learning in zebra finches, we investigated the anatomical organization of two regions that receive input from SHELL: the dorsal caudolateral nidopallium (dNCL-SHELL) and a region within the ventral arcopallium (Av). Anterograde and retrograde tracing experiments revealed a topographically organized inter-hemispheric circuit: SHELL and dNCL-SHELL, as well as adjacent nidopallial areas, send axonal projections to ipsilateral Av; Av in turn projects to contralateral SHELL, dNCL-SHELL, and regions of nidopallium adjacent to each. Av on each side also projects directly to contralateral Av. dNCL-SHELL and Av each integrate inputs from ipsilateral SHELL with inputs from sensory regions in surrounding nidopallium, suggesting that they function to integrate multimodal sensory information with song-related responses within LMAN-SHELL during vocal learning. Av projections share this integrated information from the ipsilateral hemisphere with contralateral sensory and song-learning regions. Our results suggest that the inter-hemispheric pathway through Av may function to integrate multimodal sensory feedback with vocal-learning circuitry and coordinate bilateral vocal behavior. © 2017 Wiley Periodicals, Inc.

  13. Students as resurrectionists--A multimodal humanities project in anatomy putting ethics and professionalism in historical context.

    PubMed

    Hammer, Rachel R; Jones, Trahern W; Hussain, Fareeda Taher Nazer; Bringe, Kariline; Harvey, Ronee E; Person-Rennell, Nicole H; Newman, James S

    2010-01-01

    Because medical students have many different learning styles, the authors, medical students at the Mayo Clinic College of Medicine, researched the history of anatomical specimen procurement, reviewing topic-related film, academic literature, and novels, to write, direct, and perform a dramatization based on Robert Louis Stevenson's The Body-Snatcher. Into this performance, they incorporated dance, painting, instrumental and vocal performance, and creative writing. In preparation for the performance, each actor researched an aspect of the history of anatomy. These micro-research projects were presented in a lecture before the play. Not intended to be a research study, this descriptive article discusses how student research and ethics discussions became a theatrical production. This addition to classroom and laboratory learning addresses the deep emotional response experienced by some students and provides an avenue to understand and express these feelings. This enhanced multimodal approach to "holistic learning" could be applied to any topic in the medical school curriculum, thoroughly adding to the didactics with history, humanities, and team dynamics.

  14. The Socratic Dialogue in Asynchronous Online Discussions: Is Constructivism Redundant?

    ERIC Educational Resources Information Center

    Kingsley, Paul

    2011-01-01

    Purpose: This paper aims to examine Socratic dialogue in asynchronous online discussions in relation to constructivism. The links between theory and practice in teaching are to be discussed whilst tracing the origins of Socratic dialogue and recent trends and use of seminar in research based institutions. Design/methodology/approach: Many online…

  15. Building dialogues between clinical and biomedical research through cross-species collaborations.

    PubMed

    Chao, Hsiao-Tuan; Liu, Lucy; Bellen, Hugo J

    2017-10-01

    Today, biomedical science is equipped with an impressive array of technologies and genetic resources that bolster our basic understanding of fundamental biology and enhance the practice of modern medicine by providing clinicians with a diverse toolkit to diagnose, prognosticate, and treat a plethora of conditions. Many significant advances in our understanding of disease mechanisms and therapeutic interventions have arisen from fruitful dialogues between clinicians and biomedical research scientists. However, the increasingly specialized scientific and medical disciplines, globalization of science and technology, and complex datasets often hinder the development of effective interdisciplinary collaborations between clinical medicine and biomedical research. The goal of this review is to provide examples of diverse strategies to enhance communication and collaboration across diverse disciplines. First, we discuss examples of efforts to foster interdisciplinary collaborations at institutional and multi-institutional levels. Second, we explore resources and tools for clinicians and research scientists to facilitate effective bi-directional dialogues. Third, we use our experiences in neurobiology and human genetics to highlight how communication between clinical medicine and biomedical research leads to effective implementation of cross-species model organism approaches to uncover the biological underpinnings of health and disease. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Pilots of the future - Human or computer?

    NASA Technical Reports Server (NTRS)

    Chambers, A. B.; Nagel, D. C.

    1985-01-01

    In connection with the occurrence of aircraft accidents and the evolution of the air-travel system, questions arise regarding the computer's potential for making fundamental contributions to improving the safety and reliability of air travel. An important result of an analysis of the causes of aircraft accidents is the conclusion that humans - 'pilots and other personnel' - are implicated in well over half of the accidents which occur. Over 70 percent of the incident reports contain evidence of human error. In addition, almost 75 percent show evidence of an 'information-transfer' problem. Thus, the question arises whether improvements in air safety could be achieved by removing humans from control situations. In an attempt to answer this question, it is important to take into account also certain advantages which humans have in comparison to computers. Attention is given to human error and the effects of technology, the motivation to automate, aircraft automation at the crossroads, the evolution of cockpit automation, and pilot factors.

  17. Designing a Healthy Food Partnership: lessons from the Australian Food and Health Dialogue.

    PubMed

    Jones, Alexandra; Magnusson, Roger; Swinburn, Boyd; Webster, Jacqui; Wood, Amanda; Sacks, Gary; Neal, Bruce

    2016-07-27

    Poor diets are a leading cause of disease burden worldwide. In Australia, the Federal Government established the Food and Health Dialogue (the Dialogue) in 2009 to address this issue, primarily through food reformulation. We evaluated the Dialogue's performance over its 6 years of operation and used these findings to develop recommendations for the success of the new Healthy Food Partnership. We used information from the Dialogue website, media releases, communiqués, e-newsletters, materials released under freedom-of-information, and Parliamentary Hansard to evaluate the Dialogue's achievements from October 2013 to November 2015, using the RE-AIM (reach, efficacy, adoption, implementation and maintenance) framework. We also engaged closely with two former Dialogue members. Our findings update a prior assessment done in October 2013. Little data is available to evaluate the Dialogue's recent achievements, with no information about progress against milestones released since October 2013. In the last 2 years, only one additional set of sodium reduction targets (cheese) was agreed and Quick Service Restaurant foods were added as an area for action. Some activity was identified in 12 of a possible 137 (9 %) areas of action within the Dialogue's mandate. Independent evaluation found targets were partially achieved in some food categories, with substantial variation in success between companies. No effects on the knowledge, behaviours or nutrient intake of the Australian population or evidence of impact on diet-related disease could be identified. The new Healthy Food Partnership has similar goals to the Dialogue. While highly laudable and recognised globally as cost-effective, the mechanism for delivery in Australia has been woefully inadequate. Strong government leadership, adequate funding, clear targets and timelines, management of conflict of interest, comprehensive monitoring and evaluation, and a plan for responsive regulation in the event of missed milestones

  18. How Intergroup Dialogue Facilitators Understand Their Role in Promoting Student Development and Learning

    ERIC Educational Resources Information Center

    Quaye, Stephen John; Johnson, Matthew R.

    2016-01-01

    Intergroup dialogues are co-facilitated, face-to-face dialogues between two groups that have a history of conflict (for example, White people and people of color). Although researchers have explored the outcomes of these dialogues among students, little is known about the role of facilitators. Drawing from a case study of an intergroup dialogue…

  19. Why Students Learn More From Dialogue-Than Monologue-Videos: Analyses of Peer Interactions

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.; Kang, Seokmin; Yaghmourian, David L.

    2017-01-01

    In 2 separate studies, we found that college-age students learned more when they collaboratively watched tutorial dialogue-videos than lecture-style monologue-videos. In fact, they can learn as well as the tutees in the dialogue-videos. These results replicate similar findings in the literature showing the advantage of dialogue-videos even when…

  20. Molecular brain imaging in the multimodality era

    PubMed Central

    Price, Julie C

    2012-01-01

    Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068

  1. Evidence base for multimodal therapy in cachexia.

    PubMed

    Solheim, Tora S; Laird, Barry J A

    2012-12-01

    The lack of success of unimodal treatment studies in cachexia and the growing awareness that multiple components are responsible for the development of cachexia have led to the view that cachexia intervention should include multimodal treatment. The aim of this article is to examine the evidence for multimodal treatment in the management of cancer cachexia. There are some studies involving multimodal treatment that indicate significant effects on cachexia outcomes. There are, however, no randomized controlled trials to date that incorporate fully a structured exercise program, nutrition, good symptom treatment as well as drug treatment, to counteract the effects of altered metabolism. The effectiveness of any drug intervention for cancer cachexia probably will only be maximized if incorporated into multimodal treatment. Further, cachexia treatment trials should also aim to include patients at an early phase in their cachexia trajectory and use validated outcome measures.

  2. Design Options for Multimodal Web Applications

    NASA Astrophysics Data System (ADS)

    Stanciulescu, Adrian; Vanderdonckt, Jean

    The capabilities of multimodal applications running on the web are well delineated since they are mainly constrained by what their underlying standard markup language offers, as opposed to hand-made multimodal applications. As experience in developing such multimodal web applications grows, the need arises to identify and define major design options of such applications to pave the way to a structured development life cycle. This paper provides a design space of independent design options for multimodal web applications based on three types of modalities (graphical, vocal, and tactile) and their combinations. On the one hand, these design options may provide designers with some explicit guidance on what to decide or not for their future user interface, while exploring various design alternatives. On the other hand, these design options have been implemented as graph transformations performed on a user interface model represented as a graph. Thanks to a transformation engine, it allows designers to play with the different values of each design option, to preview the results of the transformation, and to obtain the corresponding code on demand.

  3. Adapting Collaboration Dialogue in Response to Intelligent Tutoring System Feedback

    ERIC Educational Resources Information Center

    Olsen, Jennifer K.; Aleven, Vincent; Rummel, Nikol

    2015-01-01

    To be able to provide better support for collaborative learning in Intelligent Tutoring Systems, it is important to understand how collaboration patterns change. Prior work has looked at the interdependencies between utterances and the change of dialogue over time, but it has not addressed how dialogue changes during a lesson, an analysis that…

  4. The Role of Multimodal Analgesia in Spine Surgery.

    PubMed

    Kurd, Mark F; Kreitz, Tyler; Schroeder, Gregory; Vaccaro, Alexander R

    2017-04-01

    Optimal postoperative pain control allows for faster recovery, reduced complications, and improved patient satisfaction. Historically, pain management after spine surgery relied heavily on opioid medications. Multimodal regimens were developed to reduce opioid consumption and associated adverse effects. Multimodal approaches used in orthopaedic surgery of the lower extremity, especially joint arthroplasty, have been well described and studies have shown reduced opioid consumption, improved pain and function, and decreased length of stay. A growing body of evidence supports multimodal analgesia in spine surgery. Methods include the use of preemptive analgesia, NSAIDs, the neuromodulatory agents gabapentin and pregabalin, acetaminophen, and extended-action local anesthesia. The development of a standard approach to multimodal analgesia in spine surgery requires extensive assessment of the literature. Because a substantial number of spine surgeries are performed annually, a standardized approach to multimodal analgesia may provide considerable benefits, particularly in the context of the increased emphasis on accountability within the healthcare system.

  5. Multimodal lung cancer screening using the ITALUNG biomarker panel and low dose computed tomography. Results of the ITALUNG biomarker study.

    PubMed

    Carozzi, Francesca Maria; Bisanzi, Simonetta; Carrozzi, Laura; Falaschi, Fabio; Lopes Pegna, Andrea; Mascalchi, Mario; Picozzi, Giulia; Peluso, Marco; Sani, Cristina; Greco, Luana; Ocello, Cristina; Paci, Eugenio

    2017-07-01

    Asymptomatic high-risk subjects, randomized in the intervention arm of the ITALUNG trial (1,406 screened for lung cancer), were enrolled for the ITALUNG biomarker study (n = 1,356), in which samples of blood and sputum were analyzed for plasma DNA quantification (cut-off 5 ng/ml), loss of heterozygosity and microsatellite instability. The ITALUNG biomarker panel (IBP) was considered positive if at least one of the two biomarkers included in the panel was positive. Subjects with and without lung cancer diagnosis at the end of the screening cycle with LDCT (n = 517) were evaluated. Out of 18 baseline screen-detected lung cancer cases, 17 were IBP positive (94%). Repeat screen-detected lung cancer cases numbered 18, and 12 of them were positive at the baseline IBP test (66%). Interval cancer cases (2-year) and biomarker tests after a suspect non-calcific nodule follow-up were investigated. The single-test versus multimodal screening measures of accuracy were compared in a simulation within the screened ITALUNG intervention arm, considering screen-detected and interval cancer cases. Sensitivity was 90% at baseline screening. Specificity was 71% and 61% for LDCT and IBP as baseline single tests, and improved to 89% with multimodal, combined screening. The positive predictive value was 4.3% for LDCT at baseline and 10.6% for multimodal screening. Multimodal screening could improve screening efficiency at baseline, and strategies for future implementation are discussed. If IBP were used as the primary screening test, the LDCT burden might decrease by about 60%. © 2017 UICC.
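
    The relationship among the reported sensitivity, specificity, and positive predictive values can be illustrated with a short calculation. The prevalence figure below is an assumed round number chosen for illustration, not a value taken from the study; under that assumption the standard formula gives PPVs close to the reported 4.3% and 10.6%.

      # Illustrative arithmetic only: how positive predictive value (PPV) follows from
      # sensitivity, specificity and disease prevalence. The prevalence below is an
      # assumed round number, not a figure from the ITALUNG study.
      def ppv(sensitivity, specificity, prevalence):
          tp = sensitivity * prevalence                    # true-positive fraction
          fp = (1.0 - specificity) * (1.0 - prevalence)    # false-positive fraction
          return tp / (tp + fp)

      prevalence = 0.013   # assumed ~1.3% baseline detection rate, for illustration
      print(f"LDCT alone         : PPV = {ppv(0.90, 0.71, prevalence):.3f}")
      print(f"Combined (LDCT+IBP): PPV = {ppv(0.90, 0.89, prevalence):.3f}")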

  6. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  7. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2012-01-01

    Search-result snippet only: the report evaluates joint sparse representation for robust multimodal biometrics recognition on the WVU multimodal dataset, a comprehensive collection of biometric modalities such as fingerprint, iris, and palmprint, as well as on the AR face database.

  8. Multimodal Trip Planner System final evaluation report.

    DOT National Transportation Integrated Search

    2011-05-01

    This evaluation of the Multimodal Trip Planning System (MMTPS) is the culmination of a multi-year project evaluating the development and deployment of a multimodal trip planner in the Chicagoland area between 2004 and 2010. The report includes an ove...

  9. Inquiry Dialogue in the Classroom.

    ERIC Educational Resources Information Center

    Sprague, Nancy Freitag

    This study investigated the relationship between teacher behavior and pupil reflective dialogue in the classroom assuming that social problems provide a natural springboard for inquiry through classroom discussion. It was hypothesized that different teacher strategies promote different types of class interaction. Discussion styles were to be…

  10. Euthanasia—a dialogue

    PubMed Central

    Berry, P.

    2000-01-01

    A terminally ill man requests that his life be brought to a peaceful end by the doctor overseeing his care. The doctor, an atheist, regretfully declines. The patient, unsatisfied by the answer and increasingly desperate for relief, presses the doctor for an explanation. During the ensuing dialogue the philosophical, ethical and emotional arguments brought to bear by both the doctor and the patient are dissected. Key Words: Euthanasia • physician-assisted suicide • autonomy • empathy • end of life PMID:11055041

  11. Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot

    NASA Technical Reports Server (NTRS)

    Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory

    2002-01-01

    Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure so having a dialogue interface to the plan construction tools can aid rapid completion of the process. But, this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to be able to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration will describe a work in progress towards building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface, in situations in which the user needs very targeted information about the plan, in situations where natural language can express complex ideas more concisely than GUI actions, or in situations in which a graphical user interface is not appropriate.
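
    A toy illustration of the three capabilities listed here (describing the plan, answering a targeted question about it, and updating it) is sketched below. It is entirely hypothetical: the real system uses speech components and the EUROPA planner, whereas this sketch is only a hand-rolled intent dispatcher over an invented plan structure with invented activity names.

      # Hypothetical minimal sketch (not the NASA/EUROPA implementation): a dialogue
      # manager dispatching the three capabilities the abstract lists -- describe the
      # plan, answer a targeted question, and update the plan.
      from dataclasses import dataclass, field

      @dataclass
      class Plan:
          activities: dict = field(default_factory=dict)   # name -> start time (minutes)

          def describe(self):
              ordered = sorted(self.activities.items(), key=lambda kv: kv[1])
              return "; ".join(f"{name} at t={t}" for name, t in ordered)

          def query(self, name):
              if name in self.activities:
                  return f"{name} starts at t={self.activities[name]}"
              return f"No activity named {name}"

          def update(self, name, start):
              self.activities[name] = start
              return f"Moved {name} to t={start}"

      def handle_utterance(plan, utterance):
          """Toy intent routing; a real system would use a parser and dialogue state."""
          words = utterance.lower().split()
          if words[:3] == ["describe", "the", "plan"]:
              return plan.describe()
          if words[0] == "when":                      # e.g. "when is battery-check"
              return plan.query(words[-1])
          if words[0] == "move":                      # e.g. "move battery-check 90"
              return plan.update(words[1], int(words[2]))
          return "Sorry, I did not understand."

      psa_plan = Plan({"battery-check": 30, "air-sample": 60})   # invented activities
      for u in ["describe the plan", "when is air-sample", "move battery-check 90"]:
          print(u, "->", handle_utterance(psa_plan, u))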

  12. Towards a Computational Model of Sketching

    DTIC Science & Technology

    2000-01-01

    Search-result snippet only: because of the interaction that sketching provides in human-to-human communication, multimodal research will rely heavily upon, and even drive, AI research. The power of sketching in human communication arises from the high bandwidth it provides [21].

  13. Workplace aggression: beginning a dialogue.

    PubMed

    McLemore, Monica R

    2006-08-01

    The June 2005 Clinical Journal of Oncology Nursing editorial titled "Communication: Whose Problem Is It?" (Griffin-Sobel, 2005) was written to begin a dialogue about a phenomenon frequently experienced yet rarely discussed: workplace aggression, also known as disruptive behavior. Prompted by a groundbreaking study published in the American Journal of Nursing by Rosenstein and O'Daniel (2005), the editorial challenged oncology nurses to begin to fix problems of communication. After reflecting on both of the articles and considering my own experience as a nurse manager, clinician, and scholar, I decided to explore the topic as it relates to nurse-to-nurse workplace aggression. The following is a summary of interviews with nurse managers, nurse practitioners, and nurse scientists about root causes and effective strategies to manage these sometimes complicated situations. This article is meant to continue the dialogue about the very sensitive issue. Confidentiality has been maintained, and I welcome your comments.

  14. Empowering dialogues--the patients' perspective.

    PubMed

    Tveiten, Sidsel; Knutsen, Ingrid Ruud

    2011-06-01

    The aim of the study was to highlight the patients' experiences and perspectives of the dialogue with the health professionals at a pain clinic. This knowledge can develop and give a nuanced understanding of patient empowerment and sense of control. Qualitative content analysis was used to reveal the meaning of the patients' experiences and perspectives during focus group interviews. The findings and interpretations revealed the main theme: preconditions and opportunities for participation. The main theme was represented by four subthemes: means for common understanding, basis for collaboration, acknowledgement and legitimacy. The findings and interpretations are discussed in the light of an evolving theory on women's sense of control while experiencing chronic pain and empowerment. The dialogue is very important in relation to aspects of control, remoralization and demoralization, and is affected by external structural factors. This underlines the importance of further research focusing on empowerment and power. © 2010 The Authors. Scandinavian Journal of Caring Sciences © 2010 Nordic College of Caring Science.

  15. Dialogue as a Catalyst for Teacher Change: A Conceptual Analysis

    ERIC Educational Resources Information Center

    Penlington, Clare

    2008-01-01

    Teacher-teacher dialogue is a central activity within many professional learning programs. Understanding how and why dialogue works as an effective tool for teacher change is a question, however, that needs more careful probing in the extant literature. In this paper, I draw upon the philosophical theory of practical reason in order to show why…

  16. A Case Study of Epistemic Order in Mathematics Classroom Dialogue

    ERIC Educational Resources Information Center

    Ruthven, Kenneth; Hofmann, Riikka

    2016-01-01

    We define epistemic order as the way in which the exchange and development of knowledge takes place in the classroom, breaking this down into a system of three components: epistemic initiative relating to who sets the agenda in classroom dialogue, and how; epistemic appraisal relating to who judges contributions to classroom dialogue, and how; and…

  17. Health dialogues between pupils and school nurses: a description of the verbal interaction.

    PubMed

    Golsäter, Marie; Lingfors, Hans; Sidenvall, Birgitta; Enskär, Karin

    2012-11-01

    The purpose of this study was to explore and describe the content of and the verbal interaction in health dialogues between pupils and school nurses. Twenty-four health dialogues were recorded using a video camera and the conversations were analysed using the paediatric version of the Roter Interaction Analysis System. The results showed that the age-appropriate topics suggested by national recommendations were brought up in most of the health dialogues. The nurses were the ones who talked most, in terms of utterances. The pupils most frequently gave information about their lifestyle and agreed with the nurses' statements. The nurses summarised and checked that they had understood the pupils, asked closed-ended questions about lifestyle and gave information about lifestyle. Strategies aimed at making the pupil more active and participatory in the dialogues were the verbal interaction approaches most widely used by the nurses. The nurses' use of verbal interaction approaches to promote pupils' activity and participation, trying to build a partnership in the dialogue, could indicate an attempt to build patient-centred health dialogues. The nurses' great use of questions and being the ones leading the dialogues in terms of utterances point to the necessity for nurses to have an openness to the pupils' own narratives and an attentiveness to what he or she wants to talk about. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. Hybrid Human-Computing Distributed Sense-Making: Extending the SOA Paradigm for Dynamic Adjudication and Optimization of Human and Computer Roles

    ERIC Educational Resources Information Center

    Rimland, Jeffrey C.

    2013-01-01

    In many evolving systems, inputs can be derived from both human observations and physical sensors. Additionally, many computation and analysis tasks can be performed by either human beings or artificial intelligence (AI) applications. For example, weather prediction, emergency event response, assistive technology for various human sensory and…

  19. Computer Simulation of Human Service Program Evaluations.

    ERIC Educational Resources Information Center

    Trochim, William M. K.; Davis, James E.

    1985-01-01

    Describes uses of computer simulations for the context of human service program evaluation. Presents simple mathematical models for most commonly used human service outcome evaluation designs (pretest-posttest randomized experiment, pretest-posttest nonequivalent groups design, and regression-discontinuity design). Translates models into single…
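
    One of the designs named, the pretest-posttest randomized experiment, can be simulated in a few lines to show the kind of simple model the article translates into programs. All parameters below are invented for illustration; the sketch generates data under a known treatment effect and recovers it with a gain-score estimate.

      # Illustrative sketch (not the article's programs): simulate a pretest-posttest
      # randomized experiment and estimate the treatment effect. Parameters invented.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 500
      true_effect = 5.0

      pretest = rng.normal(50, 10, size=n)
      treated = rng.integers(0, 2, size=n).astype(bool)            # random assignment
      posttest = pretest + rng.normal(0, 5, size=n) + true_effect * treated

      # Simple gain-score estimate of the treatment effect.
      gain = posttest - pretest
      estimate = gain[treated].mean() - gain[~treated].mean()
      print(f"estimated effect = {estimate:.2f} (true effect = {true_effect})")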

  20. Rhesus macaques recognize unique multi-modal face-voice relations of familiar individuals and not of unfamiliar ones

    PubMed Central

    Habbershon, Holly M.; Ahmed, Sarah Z.; Cohen, Yale E.

    2013-01-01

    Communication signals in non-human primates are inherently multi-modal. However, for laboratory-housed monkeys, there is relatively little evidence in support of the use of multi-modal communication signals in individual recognition. Here, we used a preferential-looking paradigm to test whether laboratory-housed rhesus could “spontaneously” (i.e., in the absence of operant training) use multi-modal communication stimuli to discriminate between known conspecifics. The multi-modal stimulus was a silent movie of two monkeys vocalizing and an audio file of the vocalization from one of the monkeys in the movie. We found that the gaze patterns of those monkeys that knew the individuals in the movie were reliably biased toward the individual that did not produce the vocalization. In contrast, there was not a systematic gaze pattern for those monkeys that did not know the individuals in the movie. These data are consistent with the hypothesis that laboratory-housed rhesus can recognize and distinguish between conspecifics based on auditory and visual communication signals. PMID:23774779

  1. Facilitating Dialogues about Racial Realities

    ERIC Educational Resources Information Center

    Quaye, Stephen John

    2014-01-01

    Background/Context: Facilitating dialogues about racial issues in higher education classroom settings continues to be a vexing problem facing postsecondary educators. In order for students to discuss race with their peers, they need skilled facilitators who are knowledgeable about racial issues and able to support students in these difficult…

  2. Entropy growth in emotional online dialogues

    NASA Astrophysics Data System (ADS)

    Sienkiewicz, J.; Skowron, M.; Paltoglou, G.; Hołyst, Janusz A.

    2013-02-01

    We analyze massive emotionally annotated data from IRC (Internet Relay Chat) and model the dialogues between its participants by assuming that the driving force for the discussion is the entropy growth of the emotional probability distribution.
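
    The quantity in question, the Shannon entropy of the emotional probability distribution as a dialogue grows, can be computed with a short sketch. The emotion labels below are invented; the study itself works with annotated IRC data.

      # Toy sketch: Shannon entropy of the emotion-label distribution as a dialogue
      # grows. The labels and messages are invented for illustration.
      from collections import Counter
      import math

      def emotion_entropy(labels):
          counts = Counter(labels)
          total = sum(counts.values())
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      dialogue = ["neutral", "neutral", "positive", "negative", "negative",
                  "positive", "negative", "neutral", "positive", "negative"]

      for t in range(1, len(dialogue) + 1):           # entropy of the growing window
          print(t, round(emotion_entropy(dialogue[:t]), 3))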

  3. Creating Critical Conversations: Investigating the Utility of Socratic Dialogues in Elementary Social Studies Methods

    ERIC Educational Resources Information Center

    Buchanan, Lisa Brown

    2012-01-01

    This article explores the utility of Socratic dialogues in the elementary social studies methods course. Findings include preservice teachers' behaviors during dialogues, perceived strengths and challenges of using Socratic dialogues in teacher education, and the impact on student learning. Challenges and apprehensions encountered by the teacher…

  4. Socrates Lives: Dialogue as a Means of Teaching and Learning

    ERIC Educational Resources Information Center

    Moberg, Eric M.

    2008-01-01

    The purpose of this paper is to argue for the ongoing use of dialogue as a modern pedagogical and andragogical method. The author reviewed 18 scholarly sources from three education databases in this literature review. The use of dialogue as a mode of instruction spans from the Socratic Method of 399 B.C.E. to present-day uses. The literature reveals…

  5. Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

    NASA Astrophysics Data System (ADS)

    Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad

    2017-10-01

    Augmented Reality (AR) has recently emerged as a technology used in many mobile applications. Mobile AR has been defined as a medium for displaying information merged with the real-world environment in a single augmented view. There are four main types of mobile augmented reality interfaces, and one of them is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal interfaces in mobile augmented reality. The main goal of this study is to propose a conceptual framework to illustrate the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces and augmented reality. We analyzed the components of the previous frameworks and assessed which of them can be applied on mobile devices. Our framework can be used as a guide for designers and developers building mobile AR applications with adaptive multimodal interfaces.

  6. Analyzing a multimodal biometric system using real and virtual users

    NASA Astrophysics Data System (ADS)

    Scheidat, Tobias; Vielhauer, Claus

    2007-02-01

    Three main topics of recent research on multimodal biometric systems are addressed in this article: the lack of sufficiently large multimodal test data sets, the influence of cultural aspects, and data protection issues of multimodal biometric data. In this contribution, different possibilities are presented to extend multimodal databases by generating so-called virtual users, which are created by combining single biometric modality data of different users. Comparative tests on databases containing real and virtual users, based on a multimodal system using handwriting and speech, are presented to study to what degree the use of virtual multimodal databases allows conclusions with respect to recognition accuracy in comparison to real multimodal data. All tests have been carried out on databases created from donations from three different nationality groups. This allows the experimental results to be reviewed both in general and in the context of cultural origin. The results show that in most cases the usage of virtual persons leads to lower accuracy than the usage of real users in terms of the measurement applied: the Equal Error Rate. Finally, this article addresses the general question of how the concept of virtual users may influence the data protection requirements for multimodal evaluation databases in the future.
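
    The accuracy measure used here, the Equal Error Rate, can be computed from lists of genuine and impostor scores as in the sketch below; the score distributions are synthetic and the threshold search is a simple approximation, not the evaluation code used in the study.

      # Sketch of the Equal Error Rate (EER) computation on synthetic genuine/impostor
      # score distributions (illustration only).
      import numpy as np

      def equal_error_rate(genuine_scores, impostor_scores):
          """EER: the operating point where false accept rate equals false reject rate."""
          thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
          best = (float("inf"), None)
          for t in thresholds:
              frr = np.mean(genuine_scores < t)      # genuine users rejected
              far = np.mean(impostor_scores >= t)    # impostors accepted
              gap = abs(far - frr)
              if gap < best[0]:
                  best = (gap, (far + frr) / 2.0)
          return best[1]

      rng = np.random.default_rng(3)
      genuine = rng.normal(0.7, 0.1, 1000)    # higher scores for genuine attempts
      impostor = rng.normal(0.4, 0.1, 1000)
      print(f"EER = {equal_error_rate(genuine, impostor):.3f}")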

  7. Computer Human Interaction for Image Information Systems.

    ERIC Educational Resources Information Center

    Beard, David Volk

    1991-01-01

    Presents an approach to developing viable image computer-human interactions (CHI) involving user metaphors for comprehending image data and methods for locating, accessing, and displaying computer images. A medical-image radiology workstation application is used as an example, and feedback and evaluation methods are discussed. (41 references) (LRW)

  8. Filter. Remix. Make.: Cultivating Adaptability through Multimodality

    ERIC Educational Resources Information Center

    Dusenberry, Lisa; Hutter, Liz; Robinson, Joy

    2015-01-01

    This article establishes traits of adaptable communicators in the 21st century, explains why adaptability should be a goal of technical communication educators, and shows how multimodal pedagogy supports adaptability. Three examples of scalable, multimodal assignments (infographics, research interviews, and software demonstrations) that evidence…

  9. Student Performance in Computer-Assisted Instruction in Programming.

    ERIC Educational Resources Information Center

    Friend, Jamesine E.; And Others

    A computer-assisted instructional system for teaching college students the computer language AID (Algebraic Interpretive Dialogue), two control programs, and the data collected by the two control programs are described. It was found that although first-response errors were often errors of AID syntax, such errors were easily corrected. Secondly, while…

  10. 75 FR 39935 - Drinking Water Strategy Contaminants as Group(s)-Notice of Web Dialogue

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... Group(s)--Notice of Web Dialogue AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY... principles. The purpose of this notice is to announce that EPA will host a Web dialogue. The discussion topics for this Web dialogue are focused on the first of the four principles, addressing some...

  11. The power of wholeness, consciousness, and caring a dialogue on nursing science, art, and healing.

    PubMed

    Cowling, W Richard; Smith, Marlaine C; Watson, Jean

    2008-01-01

    Wholeness, consciousness, and caring are 3 critical concepts singled out and positioned in the disciplinary discourse of nursing to distinguish it from other disciplines. This article is an outgrowth of a dialogue among 4 scholars, 3 of whom have participated extensively in work aimed at synthesizing converging points in nursing theory development. It proposes a unified vision of nursing knowledge that builds on their work as a reference point for extending reflection and dialogue about the discipline of nursing. We seek an awakening of a higher/deeper place of wholeness, consciousness, and caring that will synthesize new ethical and intellectual forms and norms of "ontological caring literacy" to arrive at a unitary caring science praxis. We encourage the evolution of a mature caring-healing-health discipline and profession, helping affirm and sustain humanity, caring, and wholeness in our daily work and in the world.

  12. Optimization Model for Web Based Multimodal Interactive Simulations.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
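
    A minimal sketch of the optimization phase is given below, assuming a toy quality/cost model and the open-source PuLP solver: binary variables select one texture size and one canvas resolution so that the estimated rendering cost stays within a capability budget measured in the identification phase. The option names, quality scores, costs and budget are placeholders, not the authors' actual model.

      # Requires the PuLP package (pip install pulp).
      from pulp import LpProblem, LpMaximize, LpVariable, lpSum

      texture_opts = {"256px": (1, 10), "512px": (2, 25), "1024px": (4, 60)}   # (quality, cost)
      canvas_opts  = {"480p": (1, 15), "720p": (3, 35), "1080p": (5, 80)}
      budget = 100   # hypothetical capability score from the identification phase

      prob = LpProblem("render_config", LpMaximize)
      t = {k: LpVariable(f"tex_{k}", cat="Binary") for k in texture_opts}
      c = {k: LpVariable(f"canvas_{k}", cat="Binary") for k in canvas_opts}

      # Objective: maximize total quality of the chosen configuration.
      prob += lpSum(texture_opts[k][0] * t[k] for k in t) + lpSum(canvas_opts[k][0] * c[k] for k in c)

      # Exactly one option per parameter, and total cost within the measured budget.
      prob += lpSum(t.values()) == 1
      prob += lpSum(c.values()) == 1
      prob += (lpSum(texture_opts[k][1] * t[k] for k in t)
               + lpSum(canvas_opts[k][1] * c[k] for k in c)) <= budget

      prob.solve()
      chosen = [k for k in t if t[k].value() > 0.5] + [k for k in c if c[k].value() > 0.5]
      print("chosen configuration:", chosen)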

  13. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in simulation quality and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  14. A multimodal MRI dataset of professional chess players.

    PubMed

    Li, Kaiming; Jiang, Jing; Qiu, Lihua; Yang, Xun; Huang, Xiaoqi; Lui, Su; Gong, Qiyong

    2015-01-01

    Chess is a good model for studying high-level human brain functions such as spatial cognition, memory, planning, learning and problem solving. Recent studies have demonstrated that non-invasive MRI techniques are valuable for investigating the neural mechanisms underlying chess playing. For professional chess players (e.g., chess grandmasters and masters, or GM/Ms), the structural and functional alterations that result from long-term professional practice, and how these alterations relate to behavior, remain largely unknown. Here, we report a multimodal MRI dataset from 29 professional Chinese chess players (most of whom are GM/Ms) and 29 age-matched novices. We hope that this dataset will provide researchers with new material for further exploring high-level human brain functions.

  15. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. For feature extraction, we propose a new feature called the Complete Local Derivative Pattern (CLDP), which adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately to Gabor features extracted from a 2D image and from a depth map, obtaining two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by corresponding coefficients, are combined at the decision level to compute the total classification distance, and the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach, and also show that the proposed multi-modal 2D + 3D method is superior to other multi-modal methods and that CLDP performs better than other Local Binary Pattern (LBP)-based features. PMID:25333290
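
    The decision-level fusion step can be sketched in a few lines of Python: two per-identity distances (CLDP-Gabor and CLDP-Depth) are combined with weights and the probe is assigned the identity with the smallest total distance. The distances and weights below are made-up placeholders, not values from the paper.

      import numpy as np

      gallery_ids = ["subject_01", "subject_02", "subject_03"]

      # Hypothetical per-identity matching distances for one probe face.
      dist_cldp_gabor = np.array([0.42, 0.25, 0.61])   # from the 2D modality
      dist_cldp_depth = np.array([0.37, 0.33, 0.55])   # from the 3D depth map

      w_gabor, w_depth = 0.6, 0.4                      # assumed fusion weights

      total_distance = w_gabor * dist_cldp_gabor + w_depth * dist_cldp_depth
      print("assigned identity:", gallery_ids[int(np.argmin(total_distance))])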

  16. The Play of Socratic Dialogue

    ERIC Educational Resources Information Center

    Smith, Richard

    2011-01-01

    Proponents of philosophy for children generally see themselves as heirs to the "Socratic" tradition. They often claim too that children's aptitude for play leads them naturally to play with abstract, philosophical ideas. However in Plato's dialogues we find in the mouth of "Socrates" many warnings against philosophising with the young. Those…

  17. Rotational electrical impedance tomography using electrodes with limited surface coverage provides window for multimodal sensing

    NASA Astrophysics Data System (ADS)

    Lehti-Polojärvi, Mari; Koskela, Olli; Seppänen, Aku; Figueiras, Edite; Hyttinen, Jari

    2018-02-01

    Electrical impedance tomography (EIT) is an imaging method that could become a valuable tool in multimodal applications. One challenge in simultaneous multimodal imaging is that the EIT electrodes typically cover a large portion of the object surface. This paper investigates the feasibility of rotational EIT (rEIT) in applications where electrodes cover only a limited angle of the object surface. In the studied rEIT, the object is rotated a full 360° during a set of measurements to increase the information content of the data. We call this approach limited-angle full-revolution rEIT (LAFR-rEIT). We test LAFR-rEIT setups in two-dimensional geometries with computational and experimental data. We use up to 256 rotational measurement positions, which requires a new way to solve the forward and inverse problem of rEIT; for this, we provide a modification, available for EIDORS, in the supplementary material. The computational results demonstrate that LAFR-rEIT with eight electrodes produces the same image quality as conventional 16-electrode rEIT when data from an adequate number of rotational measurement positions are used. Both computational and experimental results indicate that the novel LAFR-rEIT provides good EIT image quality in setups with limited surface coverage and a small number of electrodes.

  18. A multimodal spectral approach to characterize rhythm in natural speech.

    PubMed

    Alexandrou, Anna Maria; Saarinen, Timo; Kujala, Jan; Salmelin, Riitta

    2016-01-01

    Human utterances demonstrate temporal patterning, also referred to as rhythm. While simple oromotor behaviors (e.g., chewing) feature a salient periodical structure, conversational speech displays a time-varying quasi-rhythmic pattern. Quantification of periodicity in speech is challenging. Unimodal spectral approaches have highlighted rhythmic aspects of speech. However, speech is a complex multimodal phenomenon that arises from the interplay of articulatory, respiratory, and vocal systems. The present study addressed the question of whether a multimodal spectral approach, in the form of coherence analysis between electromyographic (EMG) and acoustic signals, would allow one to characterize rhythm in natural speech more efficiently than a unimodal analysis. The main experimental task consisted of speech production at three speaking rates; a simple oromotor task served as control. The EMG-acoustic coherence emerged as a sensitive means of tracking speech rhythm, whereas spectral analysis of either EMG or acoustic amplitude envelope alone was less informative. Coherence metrics seem to distinguish and highlight rhythmic structure in natural speech.
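
    As a rough illustration of the multimodal spectral approach, the sketch below estimates the magnitude-squared coherence between a simulated EMG envelope and a simulated acoustic amplitude envelope that share a common quasi-rhythmic drive. The synthetic signals, sampling rate and use of scipy.signal.coherence are assumptions standing in for the authors' actual pipeline.

      import numpy as np
      from scipy.signal import coherence

      rng = np.random.default_rng(0)
      fs = 250.0                      # envelope sampling rate in Hz (assumed)
      t = np.arange(0, 60, 1 / fs)    # one minute of data
      rhythm = 4.0                    # quasi-rhythmic syllable rate, ~4 Hz

      # Shared rhythmic drive plus independent noise in each modality.
      drive = np.sin(2 * np.pi * rhythm * t)
      emg_envelope = drive + 0.8 * rng.standard_normal(t.size)
      acoustic_envelope = drive + 0.8 * rng.standard_normal(t.size)

      f, cxy = coherence(emg_envelope, acoustic_envelope, fs=fs, nperseg=1024)
      print(f"coherence peaks near {f[np.argmax(cxy)]:.1f} Hz")   # expected near the ~4 Hz drive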

  19. Radiation patterns of multimode feed-horn-coupled bolometers for FAR-IR space applications

    NASA Astrophysics Data System (ADS)

    Kalinauskaite, Eimante; Murphy, J. Anthony; McAuley, Ian; Trappe, Neal A.; McCarthy, Darragh N.; Bracken, Colm P.; Doherty, Stephen; Gradziel, Marcin L.; O'Sullivan, Créidhe; Wilson, Daniel; Peacocke, Tully; Maffei, Bruno; Lamarre, Jean-Michel; Ade, Peter A. R.; Savini, Giorgio

    2017-02-01

    A multimode horn differs from a single-mode horn in that it has a larger waveguide feeding it. Multimode horns can therefore be utilized as high-efficiency feeds for bolometric detectors, providing increased throughput and sensitivity over single-mode feeds, while also ensuring good control of the beam pattern characteristics. Although a cavity-mounted bolometer can be modelled as a perfect black-body radiator (using reciprocity to calculate beam patterns), this is an approximation. In this paper we present how this approach can be improved to include the cavity-coupled bolometer explicitly, modelled as a thin absorbing film. In general this is a major challenge for finite-element software, in that the structures are typically electrically large. However, the radiation pattern of multimode horns can be simulated more efficiently using mode matching, typically with smooth-walled waveguide modes as the basis, computing an overall scattering matrix for the horn-waveguide-cavity system. Another issue affecting the optical efficiency of the detectors is the presence of free-space gaps through which power can escape; this is best dealt with by treating the system as an absorber. Appropriate reflection and transmission matrices can be determined for the cavity using the natural eigenfields of the bolometer-cavity system. We discuss how the approach can be applied to proposed terahertz systems, and also present results on how it was applied to improve beam pattern predictions on the sky for the multi-mode HFI 857 GHz channel on Planck.
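
    The core bookkeeping of the mode-matching approach, cascading the generalized scattering matrices of consecutive sections into one overall scattering matrix, can be sketched as follows. The 2x2-block cascade formula is the standard one; the random matrices merely stand in for real modal S-matrices of a horn-waveguide-cavity system.

      import numpy as np

      def cascade(sa, sb):
          # Combine block S-matrices SA = (A11, A12, A21, A22) and SB likewise.
          A11, A12, A21, A22 = sa
          B11, B12, B21, B22 = sb
          n = A22.shape[0]
          inv1 = np.linalg.inv(np.eye(n) - B11 @ A22)   # multiple reflections between sections
          inv2 = np.linalg.inv(np.eye(n) - A22 @ B11)
          S11 = A11 + A12 @ inv1 @ B11 @ A21
          S12 = A12 @ inv1 @ B12
          S21 = B21 @ inv2 @ A21
          S22 = B22 + B21 @ inv2 @ A22 @ B12
          return (S11, S12, S21, S22)

      n_modes = 4
      rng = np.random.default_rng(1)
      random_block = lambda: 0.3 * rng.standard_normal((n_modes, n_modes))
      section_a = tuple(random_block() for _ in range(4))   # placeholder modal S-matrix
      section_b = tuple(random_block() for _ in range(4))

      S11, S12, S21, S22 = cascade(section_a, section_b)
      print("overall transmission block shape:", S21.shape)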

  20. The Next Chapter: Continuing the Dialogue

    ERIC Educational Resources Information Center

    Lund, Jacalyn

    2016-01-01

    This article is the next chapter in the conversation about doctoral physical education teacher education (D-PETE) programs. The author challenges PETE faculty members to continue the dialogue started in this special issue about D-PETE programs.

  1. Multimodal Narrative Inquiry: Six Teacher Candidates Respond

    ERIC Educational Resources Information Center

    Morawski, Cynthia M.; Rottmann, Jennifer

    2016-01-01

    In this paper we present findings of a study on the implementation of a multimodal teacher narrative inquiry component, theoretically grounded by Rosenblatt's theory of transaction analysis, methodologically supported by action research and practically enacted by narrative inquiry and multimodal learning. In particular, the component offered…

  2. New methods of multimode fiber interferometer signal processing

    NASA Astrophysics Data System (ADS)

    Vitrik, Oleg B.; Kulchin, Yuri N.; Maxaev, Oleg G.; Kirichenko, Oleg V.; Kamenev, Oleg T.; Petrov, Yuri S.

    1995-06-01

    New methods of processing the signals of multimode fiber interferometers are suggested. For single-fiber multimode interferometers with two excited modes, a method based on a special fiber unit is developed. This unit provides interaction of the modes and subsequent filtering of the summed optical field, so that the amplitude of the output signal is modulated by an external influence on the interferometer. The sensitivity of the interferometer is stabilized by applying an additional special modulation to the output signal. For single-fiber multimode interferometers with excitation of a wide mode spectrum, the intermode-interference signal is registered by a photodiode matrix and a special electronic unit then performs correlation processing. To eliminate temperature destabilization, the registered signal is adapted to temperature-induced changes in the interferometer's optical signal. The achieved parameters for the two-mode scheme are: temporal stability of 0.6% per hour and sensitivity to interferometer length deviations of 3.2 nm; for the multimode scheme: temperature stability of 0.5%/K, temporal instability of 0.2% per hour, sensitivity to interferometer length deviations of 20 nm, and a dynamic range of 35 dB.
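
    The correlation processing mentioned for the wide-mode-spectrum scheme can be illustrated by comparing a reference speckle frame from the photodiode matrix with a later frame: the normalized cross-correlation drops as the fibre is perturbed. The frames below are synthetic placeholders for real detector readouts, so this is only a sketch of the idea, not the authors' electronic unit.

      import numpy as np

      rng = np.random.default_rng(0)
      reference = rng.random((16, 16))                      # stored reference speckle frame
      perturbed = reference + 0.2 * rng.random((16, 16))    # frame after an external influence

      def normalized_correlation(a, b):
          a = a - a.mean()
          b = b - b.mean()
          return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

      print("correlation with reference:", normalized_correlation(reference, perturbed))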

  3. Daisaku Ikeda and Value-Creative Dialogue: A New Current in Interculturalism and Educational Philosophy

    ERIC Educational Resources Information Center

    Goulah, Jason

    2012-01-01

    This article focuses on Daisaku Ikeda's (1928- ) philosophy and practice of intercultural dialogue--what I call "value-creative dialogue"--as a new current in interculturalism and educational philosophy and theory. I use excerpts from Ikeda's writings to consider two aspects of his approach to dialogue. First, I locate his approach…

  4. Valuation and handling of dialogue in leadership: a grounded theory study in Swedish hospitals.

    PubMed

    Grill, C; Ahlborg, G; Lindgren, E C

    2011-01-01

    Leadership can positively affect the work environment and health. Communication and dialogue are an important part of leadership. Studies of how dialogue is valued and handled in first-line leadership have not so far been found. The aim of this study is to develop a theoretical understanding of how first-line leaders at hospitals in western Sweden value and handle dialogue in the organisation. The study design was explorative and based on grounded theory. Data collection consisted of interviews and observations. A total of 11 first-line leaders at two hospitals in western Sweden were chosen as informants, and for four of them observation was also used. One core category emerged in the analysis: leaders' communicative actions, which could be strategically or understanding-oriented, experienced as equal or unequal, and performed equitably or inequitably, within a power relationship. Four different types of communicative actions emerged: collaborative, nurturing, controlling, and confrontational. Leaders had strategies for creating arenas and relationships for dialogue, but dialogue could be constrained by external circumstances or by ignorance of the frameworks needed to conduct and accomplish dialogue. First-line leaders should be offered guidance in understanding the consequences of consciously choosing and strengthening the communication component in leadership. The positive valuation of dialogue was not always manifest in practical action. One significant consequence of not using dialogue was that information with impact on organisational efficiency and finances was communicated upwards in the management system.

  5. Theoretical Bridge-Building: The Career Development Project for the 21st Century Meets the New Era of Human Resource Development

    ERIC Educational Resources Information Center

    Cameron, Roslyn

    2009-01-01

    There are theoretical and disciplinary links between career development and human resource development; however, interdisciplinary dialogue between the two fields has been essentially limited to one-way dialogue. This one-way dialogue occurs from within the human resource development field, due to the explicit inclusion of career development…

  6. The two types of stethoscope systems for respiration system diagnostics of the human body

    NASA Astrophysics Data System (ADS)

    Abashkin, Vladimir; Achimova, Elena

    2003-12-01

    An acoustic multimode fiber-optic sensor for medical diagnostics, based on the shutter principle and using a semiconductor laser diode as the light source, has been developed. The construction and the method of component preparation are described. The other type of stethoscope is electrical. Both stethoscopes have four channels. The kinetics, dynamic vibrations and sounds of the human body can be detected, acquired and then processed by a personal computer for medical diagnostics.

  7. SWAHILI, ADDITIONAL DIALOGUES TO FOLLOW "SWAHILI, AN ACTIVE INTRODUCTION, GENERAL CONVERSATION."

    ERIC Educational Resources Information Center

    INDAKWA, JOHN; AND OTHERS

    THESE SUPPLEMENTARY DIALOGUES WERE DESIGNED TO FOLLOW THE FOREIGN SERVICE INSTITUTE TEXT "AN ACTIVE INTRODUCTION TO SWAHILI, GENERAL CONVERSATION". BASED ON THE TOPICS PRESENTED IN THE TEXT, THE DIALOGUES APPEAR IN PHONEMIC TRANSCRIPTION OF SWAHILI IN THE LEFT-HAND COLUMN, WITH ENGLISH TRANSLATION ON THE RIGHT. (AM)

  8. Enhancing resource coordination for multi-modal evacuation planning.

    DOT National Transportation Integrated Search

    2013-01-01

    This research project seeks to increase knowledge about coordinating effective multi-modal evacuation for disasters. It does so by identifying, evaluating, and assessing current transportation management approaches for multi-modal evacuation planni...

  9. Brief Survey of TSC Computing Facilities

    DOT National Transportation Integrated Search

    1972-05-01

    The Transportation Systems Center (TSC) has four, essentially separate, in-house computing facilities. We shall call them Honeywell Facility, the Hybrid Facility, the Multimode Simulation Facility, and the Central Facility. In addition to these four,...

  10. Multimodal neuroelectric interface development

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Rosipal, Roman; Clanton, Sam T.; Matthews, Bryan; Hibbs, Andrew D.; Matthews, Robert; Krupka, Michael

    2003-01-01

    We are developing electromyographic and electroencephalographic methods, which draw control signals for human-computer interfaces from the human nervous system. We have made progress in four areas: 1) real-time pattern recognition algorithms for decoding sequences of forearm muscle activity associated with control gestures; 2) signal-processing strategies for computer interfaces using electroencephalogram (EEG) signals; 3) a flexible computation framework for neuroelectric interface research; and 4) noncontact sensors, which measure electromyogram or EEG signals without resistive contact to the body.
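
    A minimal sketch of area 1, decoding forearm muscle activity from windowed EMG, is shown below: root-mean-square features per channel feed a standard classifier. The synthetic data and the choice of linear discriminant analysis are assumptions for illustration, not the authors' algorithm.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n_windows, n_channels, window_len = 200, 4, 128

      # Two gestures with different per-channel activation levels (synthetic).
      labels = rng.integers(0, 2, n_windows)
      scale = np.where(labels[:, None] == 0, [1.0, 0.3, 0.8, 0.2], [0.3, 1.0, 0.2, 0.8])
      emg = rng.standard_normal((n_windows, n_channels, window_len)) * scale[:, :, None]

      rms = np.sqrt((emg ** 2).mean(axis=2))   # root-mean-square feature per channel

      clf = LinearDiscriminantAnalysis().fit(rms[:150], labels[:150])
      print("held-out accuracy:", clf.score(rms[150:], labels[150:]))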

  11. Multimodal EEG Recordings, Psychometrics and Behavioural Analysis.

    PubMed

    Boeijinga, Peter H

    2015-01-01

    High spatial and temporal resolution measurements of neuronal activity are preferably combined. In an overview on how this approach can take shape, multimodal electroencephalography (EEG) is treated in 2 main parts: by experiments without a task and in the experimentally cued working brain. It concentrates first on the alpha rhythm properties and next on data-driven search for patterns such as the default mode network. The high-resolution volumic distributions of neuronal metabolic indices result in distributed cortical regions and possibly relate to numerous nuclei, observable in a non-invasive manner in the central nervous system of humans. The second part deals with paradigms in which nowadays assessment of target-related networks can align level-dependent blood oxygenation, electrical responses and behaviour, taking the temporal resolution advantages of event-related potentials. Evidence-based electrical propagation in serial tasks during performance is now to a large extent attributed to interconnected pathways, particularly chronometry-dependent ones, throughout a chain including a dorsal stream, next ventral cortical areas taking the flow of information towards inferior temporal domains. The influence of aging is documented, and results of the first multimodal studies in neuropharmacology are consistent. Finally a scope on implementation of advanced clinical applications and personalized marker strategies in neuropsychiatry is indicated. © 2016 S. Karger AG, Basel.

  12. Faculty Teaching Diversity through Difficult Dialogues: Stories of Challenges and Success

    ERIC Educational Resources Information Center

    Gayles, Joy Gaston; Kelly, Bridget Turner; Grays, Shaefny; Zhang, Jing Jing; Porter, Kamaria P.

    2015-01-01

    Teaching diversity courses in graduate preparation programs is likely to trigger difficult dialogues that evoke a range of emotional responses. Difficult dialogues on diversity topics must be managed effectively in order to enhance multicultural competence. This interpretive study examined the experiences of faculty who teach diversity courses in…

  13. Multimodal Freight Distribution to Support Increased Port Operations

    DOT National Transportation Integrated Search

    2016-10-01

    To support improved port operations, three different aspects of multimodal freight distribution are investigated: (i) Efficient load planning for double stack trains at inland ports; (ii) Optimization of a multimodal network for environmental sustain...

  14. MULTIMODAL IMAGING OF SYPHILITIC MULTIFOCAL RETINITIS.

    PubMed

    Curi, Andre L; Sarraf, David; Cunningham, Emmett T

    2015-01-01

    To describe multimodal imaging of syphilitic multifocal retinitis. Observational case series. Two patients developed multifocal retinitis after treatment of unrecognized syphilitic uveitis with systemic corticosteroids in the absence of appropriate antibiotic therapy. Multimodal imaging localized the foci of retinitis within the retina in contrast to superficial retinal precipitates that accumulate on the surface of the retina in eyes with untreated syphilitic uveitis. Although the retinitis resolved after treatment with systemic penicillin in both cases, vision remained poor in the patient with multifocal retinitis involving the macula. Treatment of unrecognized syphilitic uveitis with corticosteroids in the absence of antitreponemal treatment can lead to the development of multifocal retinitis. Multimodal imaging, and optical coherence tomography in particular, can be used to distinguish multifocal retinitis from superficial retinal precipitates or accumulations.

  15. Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.

    PubMed

    Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C

    2016-03-01

    Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. © The Author(s) 2015.

  16. Racial dialogues: challenges faculty of color face in the classroom.

    PubMed

    Sue, Derald Wing; Rivera, David P; Watkins, Nicole L; Kim, Rachel H; Kim, Suah; Williams, Chantea D

    2011-07-01

    Research on the experiences of faculty of color in predominantly White institutions (PWIs) suggests that they often experience the campus climate as invalidating, alienating, and hostile. Few studies, however, have actually focused on the classroom experiences of faculty of color when difficult racial dialogues occur. Using Consensually Qualitative Research, eight faculty of color were interviewed about their experiences in the classroom when racially tinged topics arose. Three major findings emerged. First, difficult racial dialogues were frequently instigated by the presence of racial microaggressions delivered toward students of color or the professor. Dialogues on race were made more difficult when the classrooms were diverse, when heated emotions arose, when there was a strong fear of self-disclosure, and when racial perspectives differed. Second, all faculty experienced an internal struggle between balancing their own values and beliefs with an attempt to remain objective. This conflict was often described as exhausting and energy-depleting. Third, faculty of color described both successful and unsuccessful strategies in facilitating difficult dialogues on race that arose in the course of their teaching. These findings have major implications for how PWIs can develop new programs, policies, and practices that will aid and support colleagues of color.

  17. The Human-Computer Interface and Information Literacy: Some Basics and Beyond.

    ERIC Educational Resources Information Center

    Church, Gary M.

    1999-01-01

    Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…

  18. Deer, Dissension, and Dialogue: A University-Community Collaboration in Public Deliberation

    ERIC Educational Resources Information Center

    Wright, Wynne

    2009-01-01

    Michigan State University embarked upon an initiative to explore deliberative dialogue as a tool for addressing community-based contested issues in agriculture and natural resources. Our goal is to assess the extent to which deliberative dialogue can help "bridge the divides" among citizens and professionals and fulfill the land-grant…

  19. Crossing the Divide within Continental Philosophy: Reconstruction, Deconstruction, Dialogue and Education

    ERIC Educational Resources Information Center

    Papastephanou, Marianna

    2012-01-01

    In this article I explore some points of convergence between Habermas and Derrida that revolve around the intersection of ethical and epistemological issues in dialogue. After some preliminary remarks on how dialogue and language are viewed by Habermas and Derrida as standpoints for departing from the philosophy of consciousness and from…

  20. Staff and Student Experiences of Dialogue Days, a Student Engagement Activity

    ERIC Educational Resources Information Center

    Asghar, Mandy

    2016-01-01

    This paper reports the findings from a descriptive phenomenological exploration of the lived experience of dialogue days, a student engagement activity, from the perspectives of staff and students. I suggest that dialogue days enhance the relational and emotional aspects of learning with the potential to impact on future student engagement and…